Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-08 04:09
Elapsed: 34m42s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0908 04:10:02.218005    4101 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0908 04:10:02.245718    4101 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.23.0-alpha.2+v1.23.0-alpha.1-83-g8dee92b2f3/linux/amd64/kops
I0908 04:10:03.069715    4101 up.go:43] Cleaning up any leaked resources from previous cluster
I0908 04:10:03.069748    4101 dumplogs.go:38] /logs/artifacts/776e64d1-105a-11ec-816d-469f625e385c/kops toolbox dump --name e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0908 04:10:03.088257    4122 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0908 04:10:03.088352    4122 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io" not found
W0908 04:10:03.671694    4101 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0908 04:10:03.671746    4101 down.go:48] /logs/artifacts/776e64d1-105a-11ec-816d-469f625e385c/kops delete cluster --name e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --yes
I0908 04:10:03.692165    4132 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0908 04:10:03.692278    4132 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io" not found
I0908 04:10:04.212924    4101 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/08 04:10:04 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0908 04:10:04.220374    4101 http.go:37] curl https://ip.jsb.workers.dev
I0908 04:10:04.306661    4101 up.go:144] /logs/artifacts/776e64d1-105a-11ec-816d-469f625e385c/kops create cluster --name e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.4 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210825 --channel=alpha --networking=cilium --container-runtime=docker --admin-access 34.67.10.69/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-2a --master-size c5.large
I0908 04:10:04.323480    4142 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0908 04:10:04.323587    4142 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0908 04:10:04.350128    4142 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0908 04:10:04.990377    4142 new_cluster.go:1052]  Cloud Provider ID = aws
... skipping 42 lines ...

I0908 04:10:36.057398    4101 up.go:181] /logs/artifacts/776e64d1-105a-11ec-816d-469f625e385c/kops validate cluster --name e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0908 04:10:36.075929    4162 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0908 04:10:36.076030    4162 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io

W0908 04:10:37.654505    4162 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0908 04:10:47.790539    4162 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0908 04:10:57.828058    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
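The validation message above explains why this phase fails: kops publishes a placeholder API record (203.0.113.123, an address from the TEST-NET-3 range reserved by RFC 5737) until dns-controller overwrites it with the real master IP. A minimal sketch of the readiness check this implies; the lookup is stubbed with the placeholder value so the snippet is self-contained, and in practice you would resolve `api.<cluster-name>` with `dig +short`:

```shell
# Placeholder address kops writes before dns-controller updates DNS.
PLACEHOLDER="203.0.113.123"

# Stubbed lookup; a real check would use:
#   resolved=$(dig +short "api.${CLUSTER_NAME}")
resolved="203.0.113.123"

# Empty output (record absent) or the placeholder both mean "not ready".
if [ -z "$resolved" ] || [ "$resolved" = "$PLACEHOLDER" ]; then
  echo "API DNS not yet propagated"
else
  echo "API DNS ready: $resolved"
fi
```

Until the record leaves this placeholder state, every `kops validate cluster` attempt fails exactly as in the retries logged below.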
W0908 04:11:07.868678    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:11:17.931367    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:11:27.971672    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:11:38.005269    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:11:48.051818    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:11:58.082863    4162 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0908 04:12:08.126661    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:12:18.161001    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:12:28.221482    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:12:38.280590    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:12:48.317037    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:12:58.356631    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:13:08.388004    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:13:18.422392    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:13:28.482671    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:13:38.517103    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
W0908 04:13:48.570376    4162 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0908 04:13:58.614584    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 8 lines ...
Machine	i-09adeb68df49b2ff0				machine "i-09adeb68df49b2ff0" has not yet joined cluster
Machine	i-0fe90adb1a5729ec3				machine "i-0fe90adb1a5729ec3" has not yet joined cluster
Pod	kube-system/cilium-9v9cw			system-node-critical pod "cilium-9v9cw" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-lkrf6		system-cluster-critical pod "coredns-5dc785954d-lkrf6" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-djqzn	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-djqzn" is pending

Validation Failed
W0908 04:14:12.802831    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
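Once the API DNS record resolves, validation shifts from the DNS error to machine and pod readiness, as the reports above show. A small sketch of tallying the remaining blockers from such a report; the input is inlined from the log above as a hypothetical stand-in for piping real `kops validate cluster` output through the same filters:

```shell
# Validation-error lines, condensed from the report above.
report='Machine i-09adeb68df49b2ff0 has not yet joined cluster
Pod kube-system/cilium-9v9cw is not ready (cilium-agent)
Pod kube-system/coredns-5dc785954d-lkrf6 is pending
Pod kube-system/coredns-autoscaler-84d4cfd89c-djqzn is pending'

# grep -c counts matching lines, giving a quick progress summary
# across retries.
pending=$(printf '%s\n' "$report" | grep -c 'is pending')
not_joined=$(printf '%s\n' "$report" | grep -c 'not yet joined')
echo "pending pods: $pending, machines not joined: $not_joined"
```

Watching these counts fall across successive retries (as they do in the reports that follow) is usually the quickest way to tell a slow bring-up from a stuck one.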
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 14 lines ...
Pod	kube-system/cilium-gd48d				system-node-critical pod "cilium-gd48d" is pending
Pod	kube-system/cilium-lvkgh				system-node-critical pod "cilium-lvkgh" is pending
Pod	kube-system/cilium-r2gzk				system-node-critical pod "cilium-r2gzk" is pending
Pod	kube-system/coredns-5dc785954d-lkrf6			system-cluster-critical pod "coredns-5dc785954d-lkrf6" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-djqzn		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-djqzn" is pending

Validation Failed
W0908 04:14:25.571459    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 16 lines ...
Pod	kube-system/cilium-hdphv				system-node-critical pod "cilium-hdphv" is pending
Pod	kube-system/cilium-lvkgh				system-node-critical pod "cilium-lvkgh" is not ready (cilium-agent)
Pod	kube-system/cilium-r2gzk				system-node-critical pod "cilium-r2gzk" is pending
Pod	kube-system/coredns-5dc785954d-lkrf6			system-cluster-critical pod "coredns-5dc785954d-lkrf6" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-djqzn		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-djqzn" is pending

Validation Failed
W0908 04:14:38.384766    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 14 lines ...
Pod	kube-system/cilium-hdphv				system-node-critical pod "cilium-hdphv" is not ready (cilium-agent)
Pod	kube-system/cilium-lvkgh				system-node-critical pod "cilium-lvkgh" is not ready (cilium-agent)
Pod	kube-system/cilium-r2gzk				system-node-critical pod "cilium-r2gzk" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-lkrf6			system-cluster-critical pod "coredns-5dc785954d-lkrf6" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-djqzn		system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-djqzn" is pending

Validation Failed
W0908 04:14:51.132919    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 12 lines ...
Pod	kube-system/cilium-hdphv		system-node-critical pod "cilium-hdphv" is not ready (cilium-agent)
Pod	kube-system/cilium-lvkgh		system-node-critical pod "cilium-lvkgh" is not ready (cilium-agent)
Pod	kube-system/cilium-r2gzk		system-node-critical pod "cilium-r2gzk" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-7r2x7	system-cluster-critical pod "coredns-5dc785954d-7r2x7" is pending
Pod	kube-system/coredns-5dc785954d-lkrf6	system-cluster-critical pod "coredns-5dc785954d-lkrf6" is pending

Validation Failed
W0908 04:15:03.841128    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 9 lines ...
KIND	NAME				MESSAGE
Pod	kube-system/cilium-gd48d	system-node-critical pod "cilium-gd48d" is not ready (cilium-agent)
Pod	kube-system/cilium-hdphv	system-node-critical pod "cilium-hdphv" is not ready (cilium-agent)
Pod	kube-system/cilium-lvkgh	system-node-critical pod "cilium-lvkgh" is not ready (cilium-agent)
Pod	kube-system/cilium-r2gzk	system-node-critical pod "cilium-r2gzk" is not ready (cilium-agent)

Validation Failed
W0908 04:15:16.485530    4162 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 990 lines ...
STEP: Destroying namespace "services-7545" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:17:57.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-1403" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:17:57.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-5826" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 53 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:17:57.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:18:01.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8033" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:01.409: INFO: Only supported for providers [vsphere] (not aws)
... skipping 98 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.435 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Sep  8 04:17:54.386: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-b79de94d-1ebc-4b16-8b59-7bf8c7dd2cd2
STEP: Creating a pod to test consume secrets
Sep  8 04:17:55.024: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928" in namespace "projected-3290" to be "Succeeded or Failed"
Sep  8 04:17:55.183: INFO: Pod "pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928": Phase="Pending", Reason="", readiness=false. Elapsed: 159.517019ms
Sep  8 04:17:57.343: INFO: Pod "pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318712643s
Sep  8 04:17:59.502: INFO: Pod "pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47783892s
Sep  8 04:18:01.661: INFO: Pod "pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.637190053s
STEP: Saw pod success
Sep  8 04:18:01.661: INFO: Pod "pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928" satisfied condition "Succeeded or Failed"
Sep  8 04:18:01.820: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep  8 04:18:02.157: INFO: Waiting for pod pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928 to disappear
Sep  8 04:18:02.315: INFO: Pod pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.471 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:02.809: INFO: Only supported for providers [azure] (not aws)
... skipping 45 lines ...
• [SLOW TEST:14.640 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:08.029: INFO: Only supported for providers [azure] (not aws)
... skipping 26 lines ...
Sep  8 04:17:53.931: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-e870e911-3170-420a-aa42-5e84393ce354
STEP: Creating a pod to test consume configMaps
Sep  8 04:17:54.567: INFO: Waiting up to 5m0s for pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87" in namespace "configmap-7331" to be "Succeeded or Failed"
Sep  8 04:17:54.727: INFO: Pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87": Phase="Pending", Reason="", readiness=false. Elapsed: 160.366746ms
Sep  8 04:17:56.885: INFO: Pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318358555s
Sep  8 04:17:59.043: INFO: Pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476433502s
Sep  8 04:18:01.201: INFO: Pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633913213s
Sep  8 04:18:03.359: INFO: Pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.791982239s
Sep  8 04:18:05.516: INFO: Pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87": Phase="Pending", Reason="", readiness=false. Elapsed: 10.949020378s
Sep  8 04:18:07.674: INFO: Pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.106826368s
STEP: Saw pod success
Sep  8 04:18:07.674: INFO: Pod "pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87" satisfied condition "Succeeded or Failed"
Sep  8 04:18:07.830: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87 container agnhost-container: <nil>
STEP: delete the pod
Sep  8 04:18:08.179: INFO: Waiting for pod pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87 to disappear
Sep  8 04:18:08.336: INFO: Pod pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.515 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:08.843: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 32 lines ...
• [SLOW TEST:15.916 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:09.191: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
STEP: getting the Pod and ensuring that it's patched
STEP: getting the PodStatus
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Sep  8 04:18:03.076: INFO: observed event type ERROR
Sep  8 04:18:03.077: FAIL: failed to see DELETED event
Unexpected error:
    <*errors.errorString | 0xc0004d4e60>: {
        s: "watch closed before UntilWithoutRetry timeout",
    }
    watch closed before UntilWithoutRetry timeout
occurred

... skipping 87 lines ...
Sep  8 04:18:06.465: INFO: aws-injector started at 2021-09-08 04:18:00 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:06.465: INFO: 	Container aws-injector ready: false, restart count 0
Sep  8 04:18:06.465: INFO: sysctl-35829d11-d532-48a8-8b97-ea6225cac224 started at 2021-09-08 04:17:54 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:06.465: INFO: 	Container test-container ready: false, restart count 0
Sep  8 04:18:06.465: INFO: netserver-1 started at 2021-09-08 04:17:55 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:06.465: INFO: 	Container webserver ready: false, restart count 0
Sep  8 04:18:06.465: INFO: fail-once-local-q4vph started at 2021-09-08 04:17:55 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:06.465: INFO: 	Container c ready: false, restart count 0
Sep  8 04:18:06.465: INFO: test-webserver-af420e93-32e5-4def-9606-79970f5eef36 started at 2021-09-08 04:17:56 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:06.465: INFO: 	Container test-webserver ready: false, restart count 0
Sep  8 04:18:06.465: INFO: hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-mqrcq started at 2021-09-08 04:17:59 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:06.465: INFO: 	Container agnhost-container ready: false, restart count 0
Sep  8 04:18:06.465: INFO: cilium-gd48d started at 2021-09-08 04:14:17 +0000 UTC (1+1 container statuses recorded)
... skipping 58 lines ...
Sep  8 04:18:08.825: INFO: 	Init container clean-cilium-state ready: true, restart count 0
Sep  8 04:18:08.825: INFO: 	Container cilium-agent ready: true, restart count 0
Sep  8 04:18:08.825: INFO: coredns-5dc785954d-lkrf6 started at 2021-09-08 04:14:44 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:08.825: INFO: 	Container coredns ready: true, restart count 0
Sep  8 04:18:08.825: INFO: coredns-autoscaler-84d4cfd89c-djqzn started at 2021-09-08 04:14:45 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:08.825: INFO: 	Container autoscaler ready: true, restart count 0
Sep  8 04:18:08.825: INFO: fail-once-local-fcrlj started at 2021-09-08 04:17:55 +0000 UTC (0+1 container statuses recorded)
Sep  8 04:18:08.825: INFO: 	Container c ready: false, restart count 1
Sep  8 04:18:08.825: INFO: pod-subpath-test-dynamicpv-47kd started at 2021-09-08 04:18:00 +0000 UTC (2+1 container statuses recorded)
Sep  8 04:18:08.825: INFO: 	Init container init-volume-dynamicpv-47kd ready: false, restart count 0
Sep  8 04:18:08.825: INFO: 	Init container test-init-volume-dynamicpv-47kd ready: false, restart count 0
Sep  8 04:18:08.825: INFO: 	Container test-container-subpath-dynamicpv-47kd ready: false, restart count 0
W0908 04:18:08.990489    4758 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
... skipping 6 lines ...
• Failure [16.797 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep  8 04:18:03.077: failed to see DELETED event
  Unexpected error:
      <*errors.errorString | 0xc0004d4e60>: {
          s: "watch closed before UntilWithoutRetry timeout",
      }
      watch closed before UntilWithoutRetry timeout
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:1045
------------------------------
{"msg":"FAILED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":0,"skipped":5,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 155 lines ...
• [SLOW TEST:18.238 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:11.528: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Creating a kubernetes client
Sep  8 04:17:53.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
W0908 04:17:55.032613    4752 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep  8 04:17:55.032: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:18:11.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7897" for this suite.


• [SLOW TEST:18.823 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:12.183: INFO: Only supported for providers [vsphere] (not aws)
... skipping 169 lines ...
• [SLOW TEST:19.731 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:13.013: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:16.657: INFO: Only supported for providers [vsphere] (not aws)
... skipping 108 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:17.162: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 82 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4e0ae94a-8c58-4f87-b164-ddf2f5b2941f
STEP: Creating a pod to test consume secrets
Sep  8 04:18:12.244: INFO: Waiting up to 5m0s for pod "pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb" in namespace "secrets-4238" to be "Succeeded or Failed"
Sep  8 04:18:12.406: INFO: Pod "pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb": Phase="Pending", Reason="", readiness=false. Elapsed: 162.010589ms
Sep  8 04:18:14.569: INFO: Pod "pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324623397s
Sep  8 04:18:16.731: INFO: Pod "pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.486773547s
STEP: Saw pod success
Sep  8 04:18:16.731: INFO: Pod "pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb" satisfied condition "Succeeded or Failed"
Sep  8 04:18:16.898: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb container secret-volume-test: <nil>
STEP: delete the pod
Sep  8 04:18:17.236: INFO: Waiting for pod pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb to disappear
Sep  8 04:18:17.398: INFO: Pod pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.622 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":38,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
Sep  8 04:18:09.270: INFO: PersistentVolumeClaim pvc-xzk5m found but phase is Pending instead of Bound.
Sep  8 04:18:11.426: INFO: PersistentVolumeClaim pvc-xzk5m found and phase=Bound (6.630195853s)
Sep  8 04:18:11.427: INFO: Waiting up to 3m0s for PersistentVolume local-gkfbv to have phase Bound
Sep  8 04:18:11.585: INFO: PersistentVolume local-gkfbv found and phase=Bound (158.016257ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bqm9
STEP: Creating a pod to test subpath
Sep  8 04:18:12.056: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bqm9" in namespace "provisioning-2714" to be "Succeeded or Failed"
Sep  8 04:18:12.212: INFO: Pod "pod-subpath-test-preprovisionedpv-bqm9": Phase="Pending", Reason="", readiness=false. Elapsed: 156.198335ms
Sep  8 04:18:14.368: INFO: Pod "pod-subpath-test-preprovisionedpv-bqm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312713807s
Sep  8 04:18:16.525: INFO: Pod "pod-subpath-test-preprovisionedpv-bqm9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469792619s
Sep  8 04:18:18.685: INFO: Pod "pod-subpath-test-preprovisionedpv-bqm9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.629705965s
STEP: Saw pod success
Sep  8 04:18:18.685: INFO: Pod "pod-subpath-test-preprovisionedpv-bqm9" satisfied condition "Succeeded or Failed"
Sep  8 04:18:18.851: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-bqm9 container test-container-volume-preprovisionedpv-bqm9: <nil>
STEP: delete the pod
Sep  8 04:18:19.184: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bqm9 to disappear
Sep  8 04:18:19.340: INFO: Pod pod-subpath-test-preprovisionedpv-bqm9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bqm9
Sep  8 04:18:19.341: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bqm9" in namespace "provisioning-2714"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:21.695: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 87 lines ...
Sep  8 04:18:16.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Sep  8 04:18:17.697: INFO: Waiting up to 5m0s for pod "var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6" in namespace "var-expansion-2193" to be "Succeeded or Failed"
Sep  8 04:18:17.856: INFO: Pod "var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6": Phase="Pending", Reason="", readiness=false. Elapsed: 158.361943ms
Sep  8 04:18:20.015: INFO: Pod "var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317652881s
Sep  8 04:18:22.178: INFO: Pod "var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.480471606s
STEP: Saw pod success
Sep  8 04:18:22.178: INFO: Pod "var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6" satisfied condition "Succeeded or Failed"
Sep  8 04:18:22.336: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6 container dapi-container: <nil>
STEP: delete the pod
Sep  8 04:18:22.664: INFO: Waiting for pod var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6 to disappear
Sep  8 04:18:22.824: INFO: Pod var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.410 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:23.173: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:18:19.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep  8 04:18:20.164: INFO: Waiting up to 5m0s for pod "security-context-1d162dd3-a0b0-432b-af95-493c57b1e036" in namespace "security-context-3515" to be "Succeeded or Failed"
Sep  8 04:18:20.328: INFO: Pod "security-context-1d162dd3-a0b0-432b-af95-493c57b1e036": Phase="Pending", Reason="", readiness=false. Elapsed: 164.08839ms
Sep  8 04:18:22.489: INFO: Pod "security-context-1d162dd3-a0b0-432b-af95-493c57b1e036": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.324819912s
STEP: Saw pod success
Sep  8 04:18:22.489: INFO: Pod "security-context-1d162dd3-a0b0-432b-af95-493c57b1e036" satisfied condition "Succeeded or Failed"
Sep  8 04:18:22.647: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod security-context-1d162dd3-a0b0-432b-af95-493c57b1e036 container test-container: <nil>
STEP: delete the pod
Sep  8 04:18:22.975: INFO: Waiting for pod security-context-1d162dd3-a0b0-432b-af95-493c57b1e036 to disappear
Sep  8 04:18:23.133: INFO: Pod security-context-1d162dd3-a0b0-432b-af95-493c57b1e036 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:18:23.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3515" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 11 lines ...
Sep  8 04:17:54.218: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5732wwhp7
STEP: creating a claim
Sep  8 04:17:54.382: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-47kd
STEP: Creating a pod to test subpath
Sep  8 04:17:54.881: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-47kd" in namespace "provisioning-5732" to be "Succeeded or Failed"
Sep  8 04:17:55.042: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 161.103765ms
Sep  8 04:17:57.205: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323717294s
Sep  8 04:17:59.367: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485875045s
Sep  8 04:18:01.530: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.649367126s
Sep  8 04:18:03.694: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.813055803s
Sep  8 04:18:05.862: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.981320694s
Sep  8 04:18:08.024: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.142786589s
Sep  8 04:18:10.185: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.303752983s
Sep  8 04:18:12.346: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.465327427s
Sep  8 04:18:14.507: INFO: Pod "pod-subpath-test-dynamicpv-47kd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.626186338s
STEP: Saw pod success
Sep  8 04:18:14.507: INFO: Pod "pod-subpath-test-dynamicpv-47kd" satisfied condition "Succeeded or Failed"
Sep  8 04:18:14.668: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-47kd container test-container-subpath-dynamicpv-47kd: <nil>
STEP: delete the pod
Sep  8 04:18:15.018: INFO: Waiting for pod pod-subpath-test-dynamicpv-47kd to disappear
Sep  8 04:18:15.178: INFO: Pod pod-subpath-test-dynamicpv-47kd no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-47kd
Sep  8 04:18:15.178: INFO: Deleting pod "pod-subpath-test-dynamicpv-47kd" in namespace "provisioning-5732"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:27.158: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 65 lines ...
Sep  8 04:18:09.443: INFO: PersistentVolumeClaim pvc-mjgtx found but phase is Pending instead of Bound.
Sep  8 04:18:11.607: INFO: PersistentVolumeClaim pvc-mjgtx found and phase=Bound (4.488758225s)
Sep  8 04:18:11.607: INFO: Waiting up to 3m0s for PersistentVolume local-wnhqn to have phase Bound
Sep  8 04:18:11.770: INFO: PersistentVolume local-wnhqn found and phase=Bound (162.631219ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jd5s
STEP: Creating a pod to test subpath
Sep  8 04:18:12.259: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jd5s" in namespace "provisioning-3486" to be "Succeeded or Failed"
Sep  8 04:18:12.422: INFO: Pod "pod-subpath-test-preprovisionedpv-jd5s": Phase="Pending", Reason="", readiness=false. Elapsed: 163.358476ms
Sep  8 04:18:14.586: INFO: Pod "pod-subpath-test-preprovisionedpv-jd5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327397665s
Sep  8 04:18:16.749: INFO: Pod "pod-subpath-test-preprovisionedpv-jd5s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490205664s
Sep  8 04:18:18.918: INFO: Pod "pod-subpath-test-preprovisionedpv-jd5s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659689558s
Sep  8 04:18:21.082: INFO: Pod "pod-subpath-test-preprovisionedpv-jd5s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.823404571s
Sep  8 04:18:23.246: INFO: Pod "pod-subpath-test-preprovisionedpv-jd5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.987405085s
STEP: Saw pod success
Sep  8 04:18:23.246: INFO: Pod "pod-subpath-test-preprovisionedpv-jd5s" satisfied condition "Succeeded or Failed"
Sep  8 04:18:23.409: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-jd5s container test-container-subpath-preprovisionedpv-jd5s: <nil>
STEP: delete the pod
Sep  8 04:18:23.743: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jd5s to disappear
Sep  8 04:18:23.905: INFO: Pod pod-subpath-test-preprovisionedpv-jd5s no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jd5s
Sep  8 04:18:23.905: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jd5s" in namespace "provisioning-3486"
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":4,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:29.658: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 68 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:31.040: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 187 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":9,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:33.480: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 150 lines ...
Sep  8 04:18:08.652: INFO: PersistentVolumeClaim pvc-lcg5x found but phase is Pending instead of Bound.
Sep  8 04:18:10.810: INFO: PersistentVolumeClaim pvc-lcg5x found and phase=Bound (4.477966877s)
Sep  8 04:18:10.810: INFO: Waiting up to 3m0s for PersistentVolume local-htchv to have phase Bound
Sep  8 04:18:10.969: INFO: PersistentVolume local-htchv found and phase=Bound (158.808037ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gjp8
STEP: Creating a pod to test atomic-volume-subpath
Sep  8 04:18:11.445: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gjp8" in namespace "provisioning-3851" to be "Succeeded or Failed"
Sep  8 04:18:11.603: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Pending", Reason="", readiness=false. Elapsed: 157.835719ms
Sep  8 04:18:13.761: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315819363s
Sep  8 04:18:15.918: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472852082s
Sep  8 04:18:18.077: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Running", Reason="", readiness=true. Elapsed: 6.631972332s
Sep  8 04:18:20.236: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Running", Reason="", readiness=true. Elapsed: 8.790850956s
Sep  8 04:18:22.397: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Running", Reason="", readiness=true. Elapsed: 10.951447303s
Sep  8 04:18:24.555: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Running", Reason="", readiness=true. Elapsed: 13.10949131s
Sep  8 04:18:26.713: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Running", Reason="", readiness=true. Elapsed: 15.267436813s
Sep  8 04:18:28.869: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Running", Reason="", readiness=true. Elapsed: 17.424146932s
Sep  8 04:18:31.026: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Running", Reason="", readiness=true. Elapsed: 19.581316914s
Sep  8 04:18:33.184: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Running", Reason="", readiness=true. Elapsed: 21.739295919s
Sep  8 04:18:35.342: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.897065054s
STEP: Saw pod success
Sep  8 04:18:35.342: INFO: Pod "pod-subpath-test-preprovisionedpv-gjp8" satisfied condition "Succeeded or Failed"
Sep  8 04:18:35.499: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-gjp8 container test-container-subpath-preprovisionedpv-gjp8: <nil>
STEP: delete the pod
Sep  8 04:18:35.824: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gjp8 to disappear
Sep  8 04:18:35.980: INFO: Pod pod-subpath-test-preprovisionedpv-gjp8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gjp8
Sep  8 04:18:35.980: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gjp8" in namespace "provisioning-3851"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:38.201: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:41.039: INFO: Only supported for providers [azure] (not aws)
... skipping 78 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":2,"skipped":18,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:41.695: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:19.388 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
Sep  8 04:18:03.627: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-2686wv7km
STEP: creating a claim
Sep  8 04:18:03.787: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-z6hx
STEP: Creating a pod to test subpath
Sep  8 04:18:04.277: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-z6hx" in namespace "provisioning-2686" to be "Succeeded or Failed"
Sep  8 04:18:04.435: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 158.115206ms
Sep  8 04:18:06.594: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316884195s
Sep  8 04:18:08.754: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476965032s
Sep  8 04:18:10.914: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636451806s
Sep  8 04:18:13.074: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796912145s
Sep  8 04:18:15.233: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.955383894s
Sep  8 04:18:17.392: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 13.114698679s
Sep  8 04:18:19.554: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 15.276972997s
Sep  8 04:18:21.713: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 17.436230889s
Sep  8 04:18:23.874: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.597046199s
Sep  8 04:18:26.033: INFO: Pod "pod-subpath-test-dynamicpv-z6hx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.755629016s
STEP: Saw pod success
Sep  8 04:18:26.033: INFO: Pod "pod-subpath-test-dynamicpv-z6hx" satisfied condition "Succeeded or Failed"
Sep  8 04:18:26.199: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-z6hx container test-container-volume-dynamicpv-z6hx: <nil>
STEP: delete the pod
Sep  8 04:18:26.530: INFO: Waiting for pod pod-subpath-test-dynamicpv-z6hx to disappear
Sep  8 04:18:26.688: INFO: Pod pod-subpath-test-dynamicpv-z6hx no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-z6hx
Sep  8 04:18:26.688: INFO: Deleting pod "pod-subpath-test-dynamicpv-z6hx" in namespace "provisioning-2686"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:51.431 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1055
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 120 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":3,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:47.916: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 67 lines ...
Sep  8 04:18:38.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Sep  8 04:18:39.176: INFO: Waiting up to 5m0s for pod "pod-79b83de0-b497-4daf-968e-930d7015a14a" in namespace "emptydir-1160" to be "Succeeded or Failed"
Sep  8 04:18:39.333: INFO: Pod "pod-79b83de0-b497-4daf-968e-930d7015a14a": Phase="Pending", Reason="", readiness=false. Elapsed: 157.001695ms
Sep  8 04:18:41.490: INFO: Pod "pod-79b83de0-b497-4daf-968e-930d7015a14a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313792157s
Sep  8 04:18:43.648: INFO: Pod "pod-79b83de0-b497-4daf-968e-930d7015a14a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471713678s
Sep  8 04:18:45.805: INFO: Pod "pod-79b83de0-b497-4daf-968e-930d7015a14a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.628892362s
Sep  8 04:18:47.963: INFO: Pod "pod-79b83de0-b497-4daf-968e-930d7015a14a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.787146392s
STEP: Saw pod success
Sep  8 04:18:47.964: INFO: Pod "pod-79b83de0-b497-4daf-968e-930d7015a14a" satisfied condition "Succeeded or Failed"
Sep  8 04:18:48.123: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-79b83de0-b497-4daf-968e-930d7015a14a container test-container: <nil>
STEP: delete the pod
Sep  8 04:18:48.445: INFO: Waiting for pod pod-79b83de0-b497-4daf-968e-930d7015a14a to disappear
Sep  8 04:18:48.603: INFO: Pod pod-79b83de0-b497-4daf-968e-930d7015a14a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.723 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:48.944: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 96 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep  8 04:18:43.701: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  8 04:18:43.860: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jwpb
STEP: Creating a pod to test subpath
Sep  8 04:18:44.020: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jwpb" in namespace "provisioning-1864" to be "Succeeded or Failed"
Sep  8 04:18:44.179: INFO: Pod "pod-subpath-test-inlinevolume-jwpb": Phase="Pending", Reason="", readiness=false. Elapsed: 158.507426ms
Sep  8 04:18:46.340: INFO: Pod "pod-subpath-test-inlinevolume-jwpb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319621935s
Sep  8 04:18:48.500: INFO: Pod "pod-subpath-test-inlinevolume-jwpb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.479501084s
STEP: Saw pod success
Sep  8 04:18:48.500: INFO: Pod "pod-subpath-test-inlinevolume-jwpb" satisfied condition "Succeeded or Failed"
Sep  8 04:18:48.658: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-jwpb container test-container-volume-inlinevolume-jwpb: <nil>
STEP: delete the pod
Sep  8 04:18:48.992: INFO: Waiting for pod pod-subpath-test-inlinevolume-jwpb to disappear
Sep  8 04:18:49.151: INFO: Pod pod-subpath-test-inlinevolume-jwpb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jwpb
Sep  8 04:18:49.151: INFO: Deleting pod "pod-subpath-test-inlinevolume-jwpb" in namespace "provisioning-1864"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":21,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:49.829: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399

      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:18:28.265: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Sep  8 04:18:37.621: INFO: PersistentVolumeClaim pvc-295ft found but phase is Pending instead of Bound.
Sep  8 04:18:39.786: INFO: PersistentVolumeClaim pvc-295ft found and phase=Bound (2.326721756s)
Sep  8 04:18:39.786: INFO: Waiting up to 3m0s for PersistentVolume local-j6nrk to have phase Bound
Sep  8 04:18:39.951: INFO: PersistentVolume local-j6nrk found and phase=Bound (164.817043ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nfgp
STEP: Creating a pod to test subpath
Sep  8 04:18:40.447: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nfgp" in namespace "provisioning-7128" to be "Succeeded or Failed"
Sep  8 04:18:40.609: INFO: Pod "pod-subpath-test-preprovisionedpv-nfgp": Phase="Pending", Reason="", readiness=false. Elapsed: 162.456139ms
Sep  8 04:18:42.775: INFO: Pod "pod-subpath-test-preprovisionedpv-nfgp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328617477s
Sep  8 04:18:44.939: INFO: Pod "pod-subpath-test-preprovisionedpv-nfgp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.49237856s
Sep  8 04:18:47.103: INFO: Pod "pod-subpath-test-preprovisionedpv-nfgp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.656058253s
STEP: Saw pod success
Sep  8 04:18:47.103: INFO: Pod "pod-subpath-test-preprovisionedpv-nfgp" satisfied condition "Succeeded or Failed"
Sep  8 04:18:47.265: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-nfgp container test-container-subpath-preprovisionedpv-nfgp: <nil>
STEP: delete the pod
Sep  8 04:18:47.618: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nfgp to disappear
Sep  8 04:18:47.780: INFO: Pod pod-subpath-test-preprovisionedpv-nfgp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nfgp
Sep  8 04:18:47.780: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nfgp" in namespace "provisioning-7128"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:18:41.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Sep  8 04:18:42.007: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f" in namespace "security-context-test-3514" to be "Succeeded or Failed"
Sep  8 04:18:42.166: INFO: Pod "alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f": Phase="Pending", Reason="", readiness=false. Elapsed: 158.671686ms
Sep  8 04:18:44.325: INFO: Pod "alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317732219s
Sep  8 04:18:46.503: INFO: Pod "alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.495450672s
Sep  8 04:18:48.663: INFO: Pod "alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655707668s
Sep  8 04:18:50.823: INFO: Pod "alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.815721654s
Sep  8 04:18:52.982: INFO: Pod "alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.975068555s
Sep  8 04:18:52.982: INFO: Pod "alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:18:53.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3514" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:53.490: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":25,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:18:43.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  8 04:18:44.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94" in namespace "downward-api-3043" to be "Succeeded or Failed"
Sep  8 04:18:44.190: INFO: Pod "downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94": Phase="Pending", Reason="", readiness=false. Elapsed: 161.86463ms
Sep  8 04:18:46.358: INFO: Pod "downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329659057s
Sep  8 04:18:48.521: INFO: Pod "downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492947586s
Sep  8 04:18:50.684: INFO: Pod "downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655682476s
Sep  8 04:18:52.849: INFO: Pod "downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.820837209s
STEP: Saw pod success
Sep  8 04:18:52.849: INFO: Pod "downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94" satisfied condition "Succeeded or Failed"
Sep  8 04:18:53.012: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94 container client-container: <nil>
STEP: delete the pod
Sep  8 04:18:53.348: INFO: Waiting for pod downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94 to disappear
Sep  8 04:18:53.510: INFO: Pod downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.797 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:53.853: INFO: Driver "local" does not provide raw block - skipping
... skipping 44 lines ...
Sep  8 04:18:23.109: INFO: PersistentVolumeClaim pvc-p7j92 found but phase is Pending instead of Bound.
Sep  8 04:18:25.268: INFO: PersistentVolumeClaim pvc-p7j92 found and phase=Bound (15.288737459s)
Sep  8 04:18:25.269: INFO: Waiting up to 3m0s for PersistentVolume local-mb8lg to have phase Bound
Sep  8 04:18:25.428: INFO: PersistentVolume local-mb8lg found and phase=Bound (158.967828ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4g92
STEP: Creating a pod to test atomic-volume-subpath
Sep  8 04:18:25.907: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4g92" in namespace "provisioning-3492" to be "Succeeded or Failed"
Sep  8 04:18:26.066: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Pending", Reason="", readiness=false. Elapsed: 158.997804ms
Sep  8 04:18:28.227: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320256109s
Sep  8 04:18:30.386: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Running", Reason="", readiness=true. Elapsed: 4.479503483s
Sep  8 04:18:32.547: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Running", Reason="", readiness=true. Elapsed: 6.640013995s
Sep  8 04:18:34.708: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Running", Reason="", readiness=true. Elapsed: 8.800595155s
Sep  8 04:18:36.867: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Running", Reason="", readiness=true. Elapsed: 10.960180291s
... skipping 2 lines ...
Sep  8 04:18:43.350: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Running", Reason="", readiness=true. Elapsed: 17.44288295s
Sep  8 04:18:45.518: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Running", Reason="", readiness=true. Elapsed: 19.611506287s
Sep  8 04:18:47.678: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Running", Reason="", readiness=true. Elapsed: 21.770818736s
Sep  8 04:18:49.842: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Running", Reason="", readiness=true. Elapsed: 23.935400616s
Sep  8 04:18:52.003: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.096179122s
STEP: Saw pod success
Sep  8 04:18:52.003: INFO: Pod "pod-subpath-test-preprovisionedpv-4g92" satisfied condition "Succeeded or Failed"
Sep  8 04:18:52.162: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-4g92 container test-container-subpath-preprovisionedpv-4g92: <nil>
STEP: delete the pod
Sep  8 04:18:52.494: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4g92 to disappear
Sep  8 04:18:52.656: INFO: Pod pod-subpath-test-preprovisionedpv-4g92 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4g92
Sep  8 04:18:52.656: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4g92" in namespace "provisioning-3492"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:55.358: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  8 04:18:45.750: INFO: Waiting up to 5m0s for pod "downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377" in namespace "projected-9513" to be "Succeeded or Failed"
Sep  8 04:18:45.912: INFO: Pod "downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377": Phase="Pending", Reason="", readiness=false. Elapsed: 161.634967ms
Sep  8 04:18:48.072: INFO: Pod "downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321699752s
Sep  8 04:18:50.231: INFO: Pod "downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480844618s
Sep  8 04:18:52.391: INFO: Pod "downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377": Phase="Pending", Reason="", readiness=false. Elapsed: 6.64093517s
Sep  8 04:18:54.550: INFO: Pod "downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.80020352s
STEP: Saw pod success
Sep  8 04:18:54.550: INFO: Pod "downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377" satisfied condition "Succeeded or Failed"
Sep  8 04:18:54.709: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377 container client-container: <nil>
STEP: delete the pod
Sep  8 04:18:55.066: INFO: Waiting for pod downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377 to disappear
Sep  8 04:18:55.225: INFO: Pod downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.922 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:18:55.720: INFO: Only supported for providers [vsphere] (not aws)
... skipping 48 lines ...
Sep  8 04:18:21.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Sep  8 04:18:22.575: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  8 04:18:22.896: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5491" in namespace "provisioning-5491" to be "Succeeded or Failed"
Sep  8 04:18:23.052: INFO: Pod "hostpath-symlink-prep-provisioning-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 156.083887ms
Sep  8 04:18:25.209: INFO: Pod "hostpath-symlink-prep-provisioning-5491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.313428835s
STEP: Saw pod success
Sep  8 04:18:25.210: INFO: Pod "hostpath-symlink-prep-provisioning-5491" satisfied condition "Succeeded or Failed"
Sep  8 04:18:25.210: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5491" in namespace "provisioning-5491"
Sep  8 04:18:25.370: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5491" to be fully deleted
Sep  8 04:18:25.526: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-4cvj
STEP: Creating a pod to test atomic-volume-subpath
Sep  8 04:18:25.685: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-4cvj" in namespace "provisioning-5491" to be "Succeeded or Failed"
Sep  8 04:18:25.841: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Pending", Reason="", readiness=false. Elapsed: 156.309675ms
Sep  8 04:18:27.998: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313818966s
Sep  8 04:18:30.163: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Running", Reason="", readiness=true. Elapsed: 4.477903846s
Sep  8 04:18:32.320: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Running", Reason="", readiness=true. Elapsed: 6.634912182s
Sep  8 04:18:34.477: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Running", Reason="", readiness=true. Elapsed: 8.792475323s
Sep  8 04:18:36.634: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Running", Reason="", readiness=true. Elapsed: 10.949776801s
... skipping 2 lines ...
Sep  8 04:18:43.106: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Running", Reason="", readiness=true. Elapsed: 17.42123333s
Sep  8 04:18:45.263: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Running", Reason="", readiness=true. Elapsed: 19.578689204s
Sep  8 04:18:47.420: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Running", Reason="", readiness=true. Elapsed: 21.735189641s
Sep  8 04:18:49.578: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Running", Reason="", readiness=true. Elapsed: 23.89285155s
Sep  8 04:18:51.734: INFO: Pod "pod-subpath-test-inlinevolume-4cvj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.049289574s
STEP: Saw pod success
Sep  8 04:18:51.734: INFO: Pod "pod-subpath-test-inlinevolume-4cvj" satisfied condition "Succeeded or Failed"
Sep  8 04:18:51.890: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-4cvj container test-container-subpath-inlinevolume-4cvj: <nil>
STEP: delete the pod
Sep  8 04:18:52.212: INFO: Waiting for pod pod-subpath-test-inlinevolume-4cvj to disappear
Sep  8 04:18:52.368: INFO: Pod pod-subpath-test-inlinevolume-4cvj no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-4cvj
Sep  8 04:18:52.368: INFO: Deleting pod "pod-subpath-test-inlinevolume-4cvj" in namespace "provisioning-5491"
STEP: Deleting pod
Sep  8 04:18:52.524: INFO: Deleting pod "pod-subpath-test-inlinevolume-4cvj" in namespace "provisioning-5491"
Sep  8 04:18:52.843: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5491" in namespace "provisioning-5491" to be "Succeeded or Failed"
Sep  8 04:18:53.000: INFO: Pod "hostpath-symlink-prep-provisioning-5491": Phase="Pending", Reason="", readiness=false. Elapsed: 157.537283ms
Sep  8 04:18:55.157: INFO: Pod "hostpath-symlink-prep-provisioning-5491": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.314243787s
STEP: Saw pod success
Sep  8 04:18:55.157: INFO: Pod "hostpath-symlink-prep-provisioning-5491" satisfied condition "Succeeded or Failed"
Sep  8 04:18:55.157: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5491" in namespace "provisioning-5491"
Sep  8 04:18:55.317: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5491" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:18:55.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5491" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should adopt matching orphans and release non-matching pods
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:165
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":2,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:01.429: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 112 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Sep  8 04:18:49.754: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  8 04:18:49.912: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-n24j
STEP: Creating a pod to test subpath
Sep  8 04:18:50.076: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-n24j" in namespace "provisioning-9039" to be "Succeeded or Failed"
Sep  8 04:18:50.232: INFO: Pod "pod-subpath-test-inlinevolume-n24j": Phase="Pending", Reason="", readiness=false. Elapsed: 156.349582ms
Sep  8 04:18:52.389: INFO: Pod "pod-subpath-test-inlinevolume-n24j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313602245s
Sep  8 04:18:54.547: INFO: Pod "pod-subpath-test-inlinevolume-n24j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471600851s
Sep  8 04:18:56.706: INFO: Pod "pod-subpath-test-inlinevolume-n24j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.630497583s
Sep  8 04:18:58.864: INFO: Pod "pod-subpath-test-inlinevolume-n24j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.787823265s
Sep  8 04:19:01.021: INFO: Pod "pod-subpath-test-inlinevolume-n24j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.94549911s
STEP: Saw pod success
Sep  8 04:19:01.021: INFO: Pod "pod-subpath-test-inlinevolume-n24j" satisfied condition "Succeeded or Failed"
Sep  8 04:19:01.178: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-n24j container test-container-subpath-inlinevolume-n24j: <nil>
STEP: delete the pod
Sep  8 04:19:01.506: INFO: Waiting for pod pod-subpath-test-inlinevolume-n24j to disappear
Sep  8 04:19:01.665: INFO: Pod pod-subpath-test-inlinevolume-n24j no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-n24j
Sep  8 04:19:01.665: INFO: Deleting pod "pod-subpath-test-inlinevolume-n24j" in namespace "provisioning-9039"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":22,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:03.465: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 83 lines ...
Sep  8 04:18:55.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Sep  8 04:18:56.760: INFO: Waiting up to 5m0s for pod "security-context-f736fadd-e91c-425c-ae88-cda82e063a0d" in namespace "security-context-6447" to be "Succeeded or Failed"
Sep  8 04:18:56.918: INFO: Pod "security-context-f736fadd-e91c-425c-ae88-cda82e063a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 158.394629ms
Sep  8 04:18:59.075: INFO: Pod "security-context-f736fadd-e91c-425c-ae88-cda82e063a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314979643s
Sep  8 04:19:01.232: INFO: Pod "security-context-f736fadd-e91c-425c-ae88-cda82e063a0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471884797s
Sep  8 04:19:03.389: INFO: Pod "security-context-f736fadd-e91c-425c-ae88-cda82e063a0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.629360498s
STEP: Saw pod success
Sep  8 04:19:03.389: INFO: Pod "security-context-f736fadd-e91c-425c-ae88-cda82e063a0d" satisfied condition "Succeeded or Failed"
Sep  8 04:19:03.548: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod security-context-f736fadd-e91c-425c-ae88-cda82e063a0d container test-container: <nil>
STEP: delete the pod
Sep  8 04:19:03.869: INFO: Waiting for pod security-context-f736fadd-e91c-425c-ae88-cda82e063a0d to disappear
Sep  8 04:19:04.025: INFO: Pod security-context-f736fadd-e91c-425c-ae88-cda82e063a0d no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.519 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:04.350: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 192 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:19:05.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1760" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Sep  8 04:18:37.540: INFO: PersistentVolumeClaim pvc-xcqz9 found but phase is Pending instead of Bound.
Sep  8 04:18:39.700: INFO: PersistentVolumeClaim pvc-xcqz9 found and phase=Bound (4.476760592s)
Sep  8 04:18:39.700: INFO: Waiting up to 3m0s for PersistentVolume local-ft6qb to have phase Bound
Sep  8 04:18:39.861: INFO: PersistentVolume local-ft6qb found and phase=Bound (161.099679ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-x9jv
STEP: Creating a pod to test subpath
Sep  8 04:18:40.338: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-x9jv" in namespace "provisioning-8733" to be "Succeeded or Failed"
Sep  8 04:18:40.496: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 158.0867ms
Sep  8 04:18:42.655: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317240677s
Sep  8 04:18:44.814: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476037124s
Sep  8 04:18:46.973: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.63507747s
Sep  8 04:18:49.135: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.797797352s
Sep  8 04:18:51.296: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.958149117s
Sep  8 04:18:53.454: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.11674751s
STEP: Saw pod success
Sep  8 04:18:53.454: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv" satisfied condition "Succeeded or Failed"
Sep  8 04:18:53.612: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-x9jv container test-container-subpath-preprovisionedpv-x9jv: <nil>
STEP: delete the pod
Sep  8 04:18:53.946: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-x9jv to disappear
Sep  8 04:18:54.115: INFO: Pod pod-subpath-test-preprovisionedpv-x9jv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-x9jv
Sep  8 04:18:54.115: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-x9jv" in namespace "provisioning-8733"
STEP: Creating pod pod-subpath-test-preprovisionedpv-x9jv
STEP: Creating a pod to test subpath
Sep  8 04:18:54.432: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-x9jv" in namespace "provisioning-8733" to be "Succeeded or Failed"
Sep  8 04:18:54.589: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 157.665708ms
Sep  8 04:18:56.748: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316068323s
Sep  8 04:18:58.907: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47544446s
Sep  8 04:19:01.066: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634197804s
Sep  8 04:19:03.225: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.793707001s
STEP: Saw pod success
Sep  8 04:19:03.226: INFO: Pod "pod-subpath-test-preprovisionedpv-x9jv" satisfied condition "Succeeded or Failed"
Sep  8 04:19:03.384: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-x9jv container test-container-subpath-preprovisionedpv-x9jv: <nil>
STEP: delete the pod
Sep  8 04:19:03.709: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-x9jv to disappear
Sep  8 04:19:03.869: INFO: Pod pod-subpath-test-preprovisionedpv-x9jv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-x9jv
Sep  8 04:19:03.869: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-x9jv" in namespace "provisioning-8733"
... skipping 33 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep  8 04:18:56.178: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep  8 04:18:56.178: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-v97n
STEP: Creating a pod to test subpath
Sep  8 04:18:56.341: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-v97n" in namespace "provisioning-661" to be "Succeeded or Failed"
Sep  8 04:18:56.500: INFO: Pod "pod-subpath-test-inlinevolume-v97n": Phase="Pending", Reason="", readiness=false. Elapsed: 159.111691ms
Sep  8 04:18:58.661: INFO: Pod "pod-subpath-test-inlinevolume-v97n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31967176s
Sep  8 04:19:00.834: INFO: Pod "pod-subpath-test-inlinevolume-v97n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493314537s
Sep  8 04:19:02.995: INFO: Pod "pod-subpath-test-inlinevolume-v97n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.65385867s
Sep  8 04:19:05.155: INFO: Pod "pod-subpath-test-inlinevolume-v97n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.81424219s
STEP: Saw pod success
Sep  8 04:19:05.155: INFO: Pod "pod-subpath-test-inlinevolume-v97n" satisfied condition "Succeeded or Failed"
Sep  8 04:19:05.322: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-v97n container test-container-volume-inlinevolume-v97n: <nil>
STEP: delete the pod
Sep  8 04:19:05.651: INFO: Waiting for pod pod-subpath-test-inlinevolume-v97n to disappear
Sep  8 04:19:05.810: INFO: Pod pod-subpath-test-inlinevolume-v97n no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-v97n
Sep  8 04:19:05.810: INFO: Deleting pod "pod-subpath-test-inlinevolume-v97n" in namespace "provisioning-661"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":21,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:06.499: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 21 lines ...
Sep  8 04:18:53.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep  8 04:18:54.884: INFO: Waiting up to 5m0s for pod "security-context-c14b06d3-8183-46a6-a375-29244f348f01" in namespace "security-context-3642" to be "Succeeded or Failed"
Sep  8 04:18:55.045: INFO: Pod "security-context-c14b06d3-8183-46a6-a375-29244f348f01": Phase="Pending", Reason="", readiness=false. Elapsed: 161.849517ms
Sep  8 04:18:57.209: INFO: Pod "security-context-c14b06d3-8183-46a6-a375-29244f348f01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324973686s
Sep  8 04:18:59.372: INFO: Pod "security-context-c14b06d3-8183-46a6-a375-29244f348f01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488386028s
Sep  8 04:19:01.535: INFO: Pod "security-context-c14b06d3-8183-46a6-a375-29244f348f01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.651228128s
Sep  8 04:19:03.699: INFO: Pod "security-context-c14b06d3-8183-46a6-a375-29244f348f01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.815923945s
Sep  8 04:19:05.867: INFO: Pod "security-context-c14b06d3-8183-46a6-a375-29244f348f01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.983139328s
STEP: Saw pod success
Sep  8 04:19:05.867: INFO: Pod "security-context-c14b06d3-8183-46a6-a375-29244f348f01" satisfied condition "Succeeded or Failed"
Sep  8 04:19:06.030: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod security-context-c14b06d3-8183-46a6-a375-29244f348f01 container test-container: <nil>
STEP: delete the pod
Sep  8 04:19:06.363: INFO: Waiting for pod security-context-c14b06d3-8183-46a6-a375-29244f348f01 to disappear
Sep  8 04:19:06.525: INFO: Pod security-context-c14b06d3-8183-46a6-a375-29244f348f01 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.980 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":4,"skipped":32,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:06.977: INFO: Only supported for providers [vsphere] (not aws)
... skipping 103 lines ...
Sep  8 04:18:06.029: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Sep  8 04:18:08.027: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Sep  8 04:18:08.187: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1947 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Sep  8 04:18:09.803: INFO: rc: 7
Sep  8 04:18:09.965: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep  8 04:18:10.126: INFO: Pod kube-proxy-mode-detector no longer exists
Sep  8 04:18:10.126: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1947 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating service affinity-nodeport-timeout in namespace services-1947
STEP: creating replication controller affinity-nodeport-timeout in namespace services-1947
I0908 04:18:10.462683    4822 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1947, replica count: 3
I0908 04:18:13.664454    4822 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0908 04:18:16.666254    4822 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
... skipping 50 lines ...
• [SLOW TEST:74.522 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:14.499 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:10.276: INFO: Only supported for providers [gce gke] (not aws)
... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:10.817: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
Sep  8 04:18:39.285: INFO: PersistentVolumeClaim pvc-vknth found but phase is Pending instead of Bound.
Sep  8 04:18:41.446: INFO: PersistentVolumeClaim pvc-vknth found and phase=Bound (4.479812922s)
Sep  8 04:18:41.446: INFO: Waiting up to 3m0s for PersistentVolume local-69pbn to have phase Bound
Sep  8 04:18:41.605: INFO: PersistentVolume local-69pbn found and phase=Bound (158.63465ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9knv
STEP: Creating a pod to test atomic-volume-subpath
Sep  8 04:18:42.084: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9knv" in namespace "provisioning-9275" to be "Succeeded or Failed"
Sep  8 04:18:42.245: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Pending", Reason="", readiness=false. Elapsed: 160.19212ms
Sep  8 04:18:44.405: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320313286s
Sep  8 04:18:46.565: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 4.480560637s
Sep  8 04:18:48.726: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 6.641531889s
Sep  8 04:18:50.888: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 8.803642681s
Sep  8 04:18:53.053: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 10.969031335s
Sep  8 04:18:55.213: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 13.128180741s
Sep  8 04:18:57.372: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 15.287575562s
Sep  8 04:18:59.531: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 17.447105043s
Sep  8 04:19:01.691: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 19.607102583s
Sep  8 04:19:03.852: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Running", Reason="", readiness=true. Elapsed: 21.767844613s
Sep  8 04:19:06.014: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.930097845s
STEP: Saw pod success
Sep  8 04:19:06.015: INFO: Pod "pod-subpath-test-preprovisionedpv-9knv" satisfied condition "Succeeded or Failed"
Sep  8 04:19:06.173: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-9knv container test-container-subpath-preprovisionedpv-9knv: <nil>
STEP: delete the pod
Sep  8 04:19:06.509: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9knv to disappear
Sep  8 04:19:06.672: INFO: Pod pod-subpath-test-preprovisionedpv-9knv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9knv
Sep  8 04:19:06.672: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9knv" in namespace "provisioning-9275"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:18:45.862: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Sep  8 04:18:53.284: INFO: PersistentVolumeClaim pvc-xkvxg found but phase is Pending instead of Bound.
Sep  8 04:18:55.443: INFO: PersistentVolumeClaim pvc-xkvxg found and phase=Bound (2.317081272s)
Sep  8 04:18:55.443: INFO: Waiting up to 3m0s for PersistentVolume local-2mqsr to have phase Bound
Sep  8 04:18:55.601: INFO: PersistentVolume local-2mqsr found and phase=Bound (157.808698ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gqjn
STEP: Creating a pod to test subpath
Sep  8 04:18:56.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gqjn" in namespace "provisioning-9497" to be "Succeeded or Failed"
Sep  8 04:18:56.238: INFO: Pod "pod-subpath-test-preprovisionedpv-gqjn": Phase="Pending", Reason="", readiness=false. Elapsed: 160.205181ms
Sep  8 04:18:58.397: INFO: Pod "pod-subpath-test-preprovisionedpv-gqjn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319322775s
Sep  8 04:19:00.556: INFO: Pod "pod-subpath-test-preprovisionedpv-gqjn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478151545s
Sep  8 04:19:02.715: INFO: Pod "pod-subpath-test-preprovisionedpv-gqjn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637640899s
Sep  8 04:19:04.874: INFO: Pod "pod-subpath-test-preprovisionedpv-gqjn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.796882463s
STEP: Saw pod success
Sep  8 04:19:04.874: INFO: Pod "pod-subpath-test-preprovisionedpv-gqjn" satisfied condition "Succeeded or Failed"
Sep  8 04:19:05.038: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-gqjn container test-container-volume-preprovisionedpv-gqjn: <nil>
STEP: delete the pod
Sep  8 04:19:05.369: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gqjn to disappear
Sep  8 04:19:05.527: INFO: Pod pod-subpath-test-preprovisionedpv-gqjn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gqjn
Sep  8 04:19:05.527: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gqjn" in namespace "provisioning-9497"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:11.198: INFO: Driver hostPath doesn't support ext3 -- skipping
... skipping 124 lines ...
Sep  8 04:17:54.241: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7273hk7kn
STEP: creating a claim
Sep  8 04:17:54.399: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-7mxm
STEP: Creating a pod to test subpath
Sep  8 04:17:54.881: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7mxm" in namespace "provisioning-7273" to be "Succeeded or Failed"
Sep  8 04:17:55.038: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 156.028635ms
Sep  8 04:17:57.198: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316424876s
Sep  8 04:17:59.354: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472966925s
Sep  8 04:18:01.512: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.630397452s
Sep  8 04:18:03.671: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.789106111s
Sep  8 04:18:05.827: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.945712827s
... skipping 5 lines ...
Sep  8 04:18:18.785: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 23.903239952s
Sep  8 04:18:20.942: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 26.060830334s
Sep  8 04:18:23.099: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 28.217323479s
Sep  8 04:18:25.256: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 30.374393823s
Sep  8 04:18:27.412: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.530705484s
STEP: Saw pod success
Sep  8 04:18:27.412: INFO: Pod "pod-subpath-test-dynamicpv-7mxm" satisfied condition "Succeeded or Failed"
Sep  8 04:18:27.568: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-7mxm container test-container-subpath-dynamicpv-7mxm: <nil>
STEP: delete the pod
Sep  8 04:18:27.890: INFO: Waiting for pod pod-subpath-test-dynamicpv-7mxm to disappear
Sep  8 04:18:28.046: INFO: Pod pod-subpath-test-dynamicpv-7mxm no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-7mxm
Sep  8 04:18:28.046: INFO: Deleting pod "pod-subpath-test-dynamicpv-7mxm" in namespace "provisioning-7273"
STEP: Creating pod pod-subpath-test-dynamicpv-7mxm
STEP: Creating a pod to test subpath
Sep  8 04:18:28.364: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7mxm" in namespace "provisioning-7273" to be "Succeeded or Failed"
Sep  8 04:18:28.521: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 156.771508ms
Sep  8 04:18:30.677: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313248423s
Sep  8 04:18:32.838: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474041918s
Sep  8 04:18:34.999: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635146475s
Sep  8 04:18:37.156: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.792457887s
Sep  8 04:18:39.313: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.948646681s
... skipping 2 lines ...
Sep  8 04:18:45.784: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 17.420290233s
Sep  8 04:18:47.942: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 19.578128312s
Sep  8 04:18:50.100: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 21.736480291s
Sep  8 04:18:52.258: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Pending", Reason="", readiness=false. Elapsed: 23.89418367s
Sep  8 04:18:54.416: INFO: Pod "pod-subpath-test-dynamicpv-7mxm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.051602541s
STEP: Saw pod success
Sep  8 04:18:54.416: INFO: Pod "pod-subpath-test-dynamicpv-7mxm" satisfied condition "Succeeded or Failed"
Sep  8 04:18:54.572: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-7mxm container test-container-subpath-dynamicpv-7mxm: <nil>
STEP: delete the pod
Sep  8 04:18:54.900: INFO: Waiting for pod pod-subpath-test-dynamicpv-7mxm to disappear
Sep  8 04:18:55.056: INFO: Pod pod-subpath-test-dynamicpv-7mxm no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-7mxm
Sep  8 04:18:55.056: INFO: Deleting pod "pod-subpath-test-dynamicpv-7mxm" in namespace "provisioning-7273"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:12.126: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 144 lines ...
• [SLOW TEST:9.336 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":4,"skipped":28,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:13.737: INFO: Only supported for providers [gce gke] (not aws)
... skipping 52 lines ...
STEP: Wait for the deployment to be ready
Sep  8 04:19:08.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671548, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671548, loc:(*time.Location)(0x9de2b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671548, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671548, loc:(*time.Location)(0x9de2b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  8 04:19:10.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671548, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671548, loc:(*time.Location)(0x9de2b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671548, loc:(*time.Location)(0x9de2b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671548, loc:(*time.Location)(0x9de2b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep  8 04:19:14.080: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:19:15.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6252" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:10.395 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:11.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Sep  8 04:19:12.260: INFO: Waiting up to 5m0s for pod "busybox-user-0-8fb94c11-59b3-4ec6-adc3-ef3785ae663c" in namespace "security-context-test-2745" to be "Succeeded or Failed"
Sep  8 04:19:12.419: INFO: Pod "busybox-user-0-8fb94c11-59b3-4ec6-adc3-ef3785ae663c": Phase="Pending", Reason="", readiness=false. Elapsed: 158.722904ms
Sep  8 04:19:14.579: INFO: Pod "busybox-user-0-8fb94c11-59b3-4ec6-adc3-ef3785ae663c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318874598s
Sep  8 04:19:16.738: INFO: Pod "busybox-user-0-8fb94c11-59b3-4ec6-adc3-ef3785ae663c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.478287624s
Sep  8 04:19:16.738: INFO: Pod "busybox-user-0-8fb94c11-59b3-4ec6-adc3-ef3785ae663c" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:19:16.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2745" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:17.076: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-e14a2e41-5a9d-48a8-9816-761683c4ea4a
STEP: Creating a pod to test consume configMaps
Sep  8 04:19:11.955: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93" in namespace "projected-2496" to be "Succeeded or Failed"
Sep  8 04:19:12.115: INFO: Pod "pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93": Phase="Pending", Reason="", readiness=false. Elapsed: 159.921838ms
Sep  8 04:19:14.278: INFO: Pod "pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322306502s
Sep  8 04:19:16.439: INFO: Pod "pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.483495091s
STEP: Saw pod success
Sep  8 04:19:16.439: INFO: Pod "pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93" satisfied condition "Succeeded or Failed"
Sep  8 04:19:16.599: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93 container agnhost-container: <nil>
STEP: delete the pod
Sep  8 04:19:16.927: INFO: Waiting for pod pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93 to disappear
Sep  8 04:19:17.086: INFO: Pod pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.581 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":9,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:06.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Sep  8 04:19:12.800: INFO: Creating a PV followed by a PVC
Sep  8 04:19:13.134: INFO: Waiting for PV local-pvppvql to bind to PVC pvc-p7jpp
Sep  8 04:19:13.134: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-p7jpp] to have phase Bound
Sep  8 04:19:13.292: INFO: PersistentVolumeClaim pvc-p7jpp found and phase=Bound (157.407151ms)
Sep  8 04:19:13.292: INFO: Waiting up to 3m0s for PersistentVolume local-pvppvql to have phase Bound
Sep  8 04:19:13.452: INFO: PersistentVolume local-pvppvql found and phase=Bound (160.146959ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
STEP: Initializing test volumes
Sep  8 04:19:13.784: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-427f92de-a9c8-4427-85ab-aa303a994c27] Namespace:persistent-local-volumes-test-6839 PodName:hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-gwm5l ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep  8 04:19:13.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:11.882 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:17.989: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
Sep  8 04:18:36.695: INFO: PersistentVolumeClaim pvc-k96pt found and phase=Bound (163.248471ms)
Sep  8 04:18:36.695: INFO: Waiting up to 3m0s for PersistentVolume nfs-ss86z to have phase Bound
Sep  8 04:18:36.857: INFO: PersistentVolume nfs-ss86z found and phase=Bound (161.672954ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
Sep  8 04:18:37.344: INFO: Waiting up to 5m0s for pod "pvc-tester-xfnj4" in namespace "pv-7460" to be "Succeeded or Failed"
Sep  8 04:18:37.505: INFO: Pod "pvc-tester-xfnj4": Phase="Pending", Reason="", readiness=false. Elapsed: 161.570857ms
Sep  8 04:18:39.670: INFO: Pod "pvc-tester-xfnj4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325941111s
Sep  8 04:18:41.833: INFO: Pod "pvc-tester-xfnj4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488803296s
Sep  8 04:18:43.996: INFO: Pod "pvc-tester-xfnj4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.652060118s
STEP: Saw pod success
Sep  8 04:18:43.996: INFO: Pod "pvc-tester-xfnj4" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
Sep  8 04:18:43.996: INFO: Deleting pod "pvc-tester-xfnj4" in namespace "pv-7460"
Sep  8 04:18:44.167: INFO: Wait up to 5m0s for pod "pvc-tester-xfnj4" to be fully deleted
Sep  8 04:18:44.329: INFO: Deleting PVC pvc-k96pt to trigger reclamation of PV 
Sep  8 04:18:44.329: INFO: Deleting PersistentVolumeClaim "pvc-k96pt"
Sep  8 04:18:44.492: INFO: Waiting for reclaim process to complete.
... skipping 7 lines ...
Sep  8 04:18:57.642: INFO: PersistentVolume nfs-ss86z found and phase=Available (13.149933156s)
Sep  8 04:18:57.804: INFO: PV nfs-ss86z now in "Available" phase
STEP: Re-mounting the volume.
Sep  8 04:18:57.969: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-6dhtc] to have phase Bound
Sep  8 04:18:58.131: INFO: PersistentVolumeClaim pvc-6dhtc found and phase=Bound (161.825882ms)
STEP: Verifying the mount has been cleaned.
Sep  8 04:18:58.296: INFO: Waiting up to 5m0s for pod "pvc-tester-nrqqj" in namespace "pv-7460" to be "Succeeded or Failed"
Sep  8 04:18:58.458: INFO: Pod "pvc-tester-nrqqj": Phase="Pending", Reason="", readiness=false. Elapsed: 162.60196ms
Sep  8 04:19:00.621: INFO: Pod "pvc-tester-nrqqj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.325771366s
STEP: Saw pod success
Sep  8 04:19:00.621: INFO: Pod "pvc-tester-nrqqj" satisfied condition "Succeeded or Failed"
Sep  8 04:19:00.621: INFO: Deleting pod "pvc-tester-nrqqj" in namespace "pv-7460"
Sep  8 04:19:00.793: INFO: Wait up to 5m0s for pod "pvc-tester-nrqqj" to be fully deleted
Sep  8 04:19:00.956: INFO: Pod exited without failure; the volume has been recycled.
Sep  8 04:19:00.956: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
Sep  8 04:19:00.956: INFO: Deleting PVC pvc-6dhtc to trigger reclamation of PV 
Sep  8 04:19:00.956: INFO: Deleting PersistentVolumeClaim "pvc-6dhtc"
... skipping 60 lines ...
• [SLOW TEST:13.331 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":5,"skipped":67,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Sep  8 04:19:13.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
Sep  8 04:19:14.541: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  8 04:19:14.857: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6790" in namespace "provisioning-6790" to be "Succeeded or Failed"
Sep  8 04:19:15.023: INFO: Pod "hostpath-symlink-prep-provisioning-6790": Phase="Pending", Reason="", readiness=false. Elapsed: 165.915021ms
Sep  8 04:19:17.179: INFO: Pod "hostpath-symlink-prep-provisioning-6790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.322257673s
STEP: Saw pod success
Sep  8 04:19:17.179: INFO: Pod "hostpath-symlink-prep-provisioning-6790" satisfied condition "Succeeded or Failed"
Sep  8 04:19:17.179: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6790" in namespace "provisioning-6790"
Sep  8 04:19:17.341: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6790" to be fully deleted
Sep  8 04:19:17.496: INFO: Creating resource for inline volume
Sep  8 04:19:17.497: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Sep  8 04:19:17.497: INFO: Deleting pod "pod-subpath-test-inlinevolume-zvd4" in namespace "provisioning-6790"
Sep  8 04:19:17.810: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6790" in namespace "provisioning-6790" to be "Succeeded or Failed"
Sep  8 04:19:17.966: INFO: Pod "hostpath-symlink-prep-provisioning-6790": Phase="Pending", Reason="", readiness=false. Elapsed: 155.938716ms
Sep  8 04:19:20.212: INFO: Pod "hostpath-symlink-prep-provisioning-6790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.401501864s
STEP: Saw pod success
Sep  8 04:19:20.212: INFO: Pod "hostpath-symlink-prep-provisioning-6790" satisfied condition "Succeeded or Failed"
Sep  8 04:19:20.212: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6790" in namespace "provisioning-6790"
Sep  8 04:19:20.389: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6790" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:19:20.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6790" for this suite.
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":24,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":2,"skipped":43,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:18.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a read only busybox container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:25.060: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 19 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:18:34.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep  8 04:18:35.447: INFO: PodSpec: initContainers in spec.initContainers
Sep  8 04:19:25.596: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-036cd6ab-c3e0-4256-b7bf-3f6ca006f823", GenerateName:"", Namespace:"init-container-6636", SelfLink:"", UID:"9e309b48-da9a-48ea-9131-1b6d71731d35", ResourceVersion:"5406", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766671515, loc:(*time.Location)(0x9de2b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"446998401"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0029564f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002956510)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002956528), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002956540)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-lhww8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc003648640), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-lhww8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-lhww8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-lhww8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003bd6a00), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-53-124.ap-northeast-2.compute.internal", HostNetwork:false, 
HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021ca5b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003bd6a80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003bd6aa0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003bd6aa8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003bd6aac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003c664b0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671515, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671515, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671515, loc:(*time.Location)(0x9de2b80)}}, Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766671515, loc:(*time.Location)(0x9de2b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.53.124", PodIP:"100.96.3.241", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.3.241"}}, StartTime:(*v1.Time)(0xc002956570), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021ca690)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0021ca770)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://0d88135cd12825d03fea478f3e7cc52fcf4d59faa657bc62a3546a07586f6444", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036486e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0036486a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003bd6b2f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:19:25.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6636" for this suite.


• [SLOW TEST:51.271 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 8 lines ...
Sep  8 04:18:54.384: INFO: Creating resource for dynamic PV
Sep  8 04:18:54.384: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-5222vx26h
STEP: creating a claim
STEP: Expanding non-expandable pvc
Sep  8 04:18:54.863: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Sep  8 04:18:55.186: INFO: Error updating pvc awswc7kf: PersistentVolumeClaim "awswc7kf" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5222vx26h",
  	... // 2 identical fields
  }

... skipping 210 lines ...
Sep  8 04:19:25.839: INFO: Error updating pvc awswc7kf: PersistentVolumeClaim "awswc7kf" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":3,"skipped":25,"failed":0}

SS
------------------------------
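Note: the repeated Forbidden errors above are the expected outcome of this test — a bound PVC's spec is immutable except for resources.requests, and a resize request is only honored when the PVC's StorageClass sets allowVolumeExpansion. A minimal sketch of a StorageClass that would permit expansion (the name is illustrative, not from this run):

```yaml
# Illustrative StorageClass sketch; metadata.name is hypothetical.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-expandable
provisioner: kubernetes.io/aws-ebs
# Without this field (or with it false), edits to a bound PVC's
# spec.resources.requests are rejected, as seen in the log above.
allowVolumeExpansion: true
```

------------------------------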
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 95 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:28.942: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-1065/configmap-test-46fa83fb-9558-4ae0-8889-582e72e95bf1
STEP: Creating a pod to test consume configMaps
Sep  8 04:19:23.829: INFO: Waiting up to 5m0s for pod "pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763" in namespace "configmap-1065" to be "Succeeded or Failed"
Sep  8 04:19:23.993: INFO: Pod "pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763": Phase="Pending", Reason="", readiness=false. Elapsed: 163.700968ms
Sep  8 04:19:26.156: INFO: Pod "pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326404024s
Sep  8 04:19:28.320: INFO: Pod "pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.490028003s
STEP: Saw pod success
Sep  8 04:19:28.320: INFO: Pod "pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763" satisfied condition "Succeeded or Failed"
Sep  8 04:19:28.482: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763 container env-test: <nil>
STEP: delete the pod
Sep  8 04:19:28.852: INFO: Waiting for pod pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763 to disappear
Sep  8 04:19:29.015: INFO: Pod pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.658 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:16.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
• [SLOW TEST:16.849 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":4,"skipped":41,"failed":0}

SS
------------------------------
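Note: the PodReadinessGate test above exercises pods whose readiness additionally depends on a custom condition set by an external controller. A minimal sketch of such a pod spec (the condition type and names are illustrative):

```yaml
# Illustrative pod with a readiness gate; names are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-gate-demo
spec:
  readinessGates:
  # The pod only reports Ready once some controller patches this
  # condition to status "True" in the pod's status.conditions.
  - conditionType: "example.com/feature-ready"
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```

------------------------------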
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:6.938 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:33.603: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 65 lines ...
• [SLOW TEST:45.964 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:35.115: INFO: Only supported for providers [vsphere] (not aws)
... skipping 51 lines ...
• [SLOW TEST:14.134 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Sep  8 04:19:24.336: INFO: PersistentVolumeClaim pvc-ttpqb found but phase is Pending instead of Bound.
Sep  8 04:19:26.495: INFO: PersistentVolumeClaim pvc-ttpqb found and phase=Bound (13.134688075s)
Sep  8 04:19:26.496: INFO: Waiting up to 3m0s for PersistentVolume local-q92bn to have phase Bound
Sep  8 04:19:26.655: INFO: PersistentVolume local-q92bn found and phase=Bound (159.106754ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8vkc
STEP: Creating a pod to test subpath
Sep  8 04:19:27.132: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8vkc" in namespace "provisioning-8841" to be "Succeeded or Failed"
Sep  8 04:19:27.291: INFO: Pod "pod-subpath-test-preprovisionedpv-8vkc": Phase="Pending", Reason="", readiness=false. Elapsed: 158.920769ms
Sep  8 04:19:29.452: INFO: Pod "pod-subpath-test-preprovisionedpv-8vkc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319463881s
Sep  8 04:19:31.612: INFO: Pod "pod-subpath-test-preprovisionedpv-8vkc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.47950171s
STEP: Saw pod success
Sep  8 04:19:31.612: INFO: Pod "pod-subpath-test-preprovisionedpv-8vkc" satisfied condition "Succeeded or Failed"
Sep  8 04:19:31.772: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-8vkc container test-container-volume-preprovisionedpv-8vkc: <nil>
STEP: delete the pod
Sep  8 04:19:32.100: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8vkc to disappear
Sep  8 04:19:32.259: INFO: Pod pod-subpath-test-preprovisionedpv-8vkc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8vkc
Sep  8 04:19:32.259: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8vkc" in namespace "provisioning-8841"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:35.723: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 38 lines ...
• [SLOW TEST:6.492 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: too few pods, absolute => should not allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":5,"skipped":35,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:33.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep  8 04:19:34.134: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:19:40.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3139" for this suite.


• [SLOW TEST:7.576 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}
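Note: the init-container test above relies on the documented behavior that with restartPolicy: Never, a failing init container fails the whole pod and the app containers never start. A minimal sketch of such a pod (image and commands are illustrative):

```yaml
# Illustrative pod sketch; name/image/commands are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init
    image: busybox
    command: ["sh", "-c", "exit 1"]   # init fails, so pod phase becomes Failed
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo never runs"]  # never started
```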
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:40.813: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
• [SLOW TEST:16.241 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":4,"skipped":52,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:793
    apply set/view last-applied
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:828
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":3,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:21.815 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":68,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
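Note: the readiness-probe test above verifies that a pod does not report Ready before its configured initial delay and is never restarted by the probe (readiness probes, unlike liveness probes, only gate traffic). A minimal sketch of such a probe (values illustrative):

```yaml
# Illustrative container fragment; path/port/timings are hypothetical.
containers:
- name: app
  image: registry.k8s.io/e2e-test-images/agnhost:2.32
  readinessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 30  # pod must not be Ready before this delay
    periodSeconds: 10        # probe interval after the initial delay
```

------------------------------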
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:42.240: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:42.293: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 220 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:43.143: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 152 lines ...
Sep  8 04:19:35.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep  8 04:19:36.715: INFO: Waiting up to 5m0s for pod "test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b" in namespace "svcaccounts-5961" to be "Succeeded or Failed"
Sep  8 04:19:36.878: INFO: Pod "test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b": Phase="Pending", Reason="", readiness=false. Elapsed: 163.512294ms
Sep  8 04:19:39.085: INFO: Pod "test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.369954565s
Sep  8 04:19:41.245: INFO: Pod "test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53037997s
Sep  8 04:19:43.410: INFO: Pod "test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695613415s
Sep  8 04:19:45.579: INFO: Pod "test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.863956151s
STEP: Saw pod success
Sep  8 04:19:45.579: INFO: Pod "test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b" satisfied condition "Succeeded or Failed"
Sep  8 04:19:45.740: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b container agnhost-container: <nil>
STEP: delete the pod
Sep  8 04:19:46.162: INFO: Waiting for pod test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b to disappear
Sep  8 04:19:46.324: INFO: Pod test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.928 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:46.691: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":6,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:46.757: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:29.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1027
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:46.779: INFO: Only supported for providers [vsphere] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-f2294486-96c3-4edb-85ca-50da53b5d232
STEP: Creating a pod to test consume configMaps
Sep  8 04:19:43.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92" in namespace "configmap-206" to be "Succeeded or Failed"
Sep  8 04:19:43.717: INFO: Pod "pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92": Phase="Pending", Reason="", readiness=false. Elapsed: 157.822866ms
Sep  8 04:19:45.962: INFO: Pod "pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.402086048s
Sep  8 04:19:48.169: INFO: Pod "pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.609612678s
STEP: Saw pod success
Sep  8 04:19:48.169: INFO: Pod "pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92" satisfied condition "Succeeded or Failed"
Sep  8 04:19:48.353: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92 container agnhost-container: <nil>
STEP: delete the pod
Sep  8 04:19:48.698: INFO: Waiting for pod pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92 to disappear
Sep  8 04:19:48.858: INFO: Pod pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.734 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:49.195: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 46 lines ...
Sep  8 04:19:40.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep  8 04:19:41.765: INFO: Waiting up to 5m0s for pod "pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63" in namespace "emptydir-1842" to be "Succeeded or Failed"
Sep  8 04:19:41.922: INFO: Pod "pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63": Phase="Pending", Reason="", readiness=false. Elapsed: 156.518036ms
Sep  8 04:19:44.079: INFO: Pod "pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313415535s
Sep  8 04:19:46.236: INFO: Pod "pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470505508s
Sep  8 04:19:48.398: INFO: Pod "pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.632532854s
STEP: Saw pod success
Sep  8 04:19:48.398: INFO: Pod "pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63" satisfied condition "Succeeded or Failed"
Sep  8 04:19:48.557: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63 container test-container: <nil>
STEP: delete the pod
Sep  8 04:19:48.896: INFO: Waiting for pod pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63 to disappear
Sep  8 04:19:49.054: INFO: Pod pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.548 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:49.383: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 107 lines ...
• [SLOW TEST:9.308 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:56.128: INFO: Only supported for providers [azure] (not aws)
... skipping 84 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:56.388: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 220 lines ...
  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:40
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:44.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:13.256 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:58.061: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:19:58.771: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
• [SLOW TEST:49.079 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":2,"skipped":17,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:00.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5646" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:01.003: INFO: Driver local doesn't support ext4 -- skipping
... skipping 32 lines ...
STEP: Destroying namespace "node-problem-detector-9022" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [1.138 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 22 lines ...
• [SLOW TEST:13.262 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":7,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:02.708: INFO: Only supported for providers [gce gke] (not aws)
... skipping 449 lines ...
• [SLOW TEST:73.419 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":6,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:03.405: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 115 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:05.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5509" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":8,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:06.065: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 68 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Sep  8 04:19:57.501: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-4b9ff64e-5b87-4335-b52d-1e0ee52f6347" in namespace "security-context-test-8987" to be "Succeeded or Failed"
Sep  8 04:19:57.661: INFO: Pod "busybox-readonly-true-4b9ff64e-5b87-4335-b52d-1e0ee52f6347": Phase="Pending", Reason="", readiness=false. Elapsed: 159.608445ms
Sep  8 04:19:59.827: INFO: Pod "busybox-readonly-true-4b9ff64e-5b87-4335-b52d-1e0ee52f6347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325550392s
Sep  8 04:20:01.987: INFO: Pod "busybox-readonly-true-4b9ff64e-5b87-4335-b52d-1e0ee52f6347": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485819138s
Sep  8 04:20:04.150: INFO: Pod "busybox-readonly-true-4b9ff64e-5b87-4335-b52d-1e0ee52f6347": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648410848s
Sep  8 04:20:06.310: INFO: Pod "busybox-readonly-true-4b9ff64e-5b87-4335-b52d-1e0ee52f6347": Phase="Failed", Reason="", readiness=false. Elapsed: 8.809049497s
Sep  8 04:20:06.310: INFO: Pod "busybox-readonly-true-4b9ff64e-5b87-4335-b52d-1e0ee52f6347" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:06.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8987" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":60,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:06.667: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 195 lines ...
Sep  8 04:19:52.757: INFO: PersistentVolumeClaim pvc-7hx4f found but phase is Pending instead of Bound.
Sep  8 04:19:54.920: INFO: PersistentVolumeClaim pvc-7hx4f found and phase=Bound (4.486167273s)
Sep  8 04:19:54.920: INFO: Waiting up to 3m0s for PersistentVolume local-n7rqn to have phase Bound
Sep  8 04:19:55.082: INFO: PersistentVolume local-n7rqn found and phase=Bound (162.115906ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lsfk
STEP: Creating a pod to test subpath
Sep  8 04:19:55.570: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lsfk" in namespace "provisioning-2880" to be "Succeeded or Failed"
Sep  8 04:19:55.736: INFO: Pod "pod-subpath-test-preprovisionedpv-lsfk": Phase="Pending", Reason="", readiness=false. Elapsed: 166.3882ms
Sep  8 04:19:57.900: INFO: Pod "pod-subpath-test-preprovisionedpv-lsfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329791301s
Sep  8 04:20:00.063: INFO: Pod "pod-subpath-test-preprovisionedpv-lsfk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492964188s
Sep  8 04:20:02.226: INFO: Pod "pod-subpath-test-preprovisionedpv-lsfk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.656253458s
Sep  8 04:20:04.390: INFO: Pod "pod-subpath-test-preprovisionedpv-lsfk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.820042676s
STEP: Saw pod success
Sep  8 04:20:04.390: INFO: Pod "pod-subpath-test-preprovisionedpv-lsfk" satisfied condition "Succeeded or Failed"
Sep  8 04:20:04.554: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-lsfk container test-container-subpath-preprovisionedpv-lsfk: <nil>
STEP: delete the pod
Sep  8 04:20:04.889: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lsfk to disappear
Sep  8 04:20:05.051: INFO: Pod pod-subpath-test-preprovisionedpv-lsfk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lsfk
Sep  8 04:20:05.051: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lsfk" in namespace "provisioning-2880"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":80,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:09.563: INFO: Only supported for providers [gce gke] (not aws)
... skipping 108 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-b1e7b3f2-303e-4d5d-bf90-44023a8b6c9b
STEP: Creating a pod to test consume configMaps
Sep  8 04:20:02.148: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf" in namespace "projected-4347" to be "Succeeded or Failed"
Sep  8 04:20:02.305: INFO: Pod "pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf": Phase="Pending", Reason="", readiness=false. Elapsed: 156.942905ms
Sep  8 04:20:04.463: INFO: Pod "pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314009203s
Sep  8 04:20:06.620: INFO: Pod "pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471143238s
Sep  8 04:20:08.778: INFO: Pod "pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.629839965s
STEP: Saw pod success
Sep  8 04:20:08.778: INFO: Pod "pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf" satisfied condition "Succeeded or Failed"
Sep  8 04:20:08.935: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep  8 04:20:09.256: INFO: Waiting for pod pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf to disappear
Sep  8 04:20:09.412: INFO: Pod pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.683 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:09.737: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:11.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-124" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":10,"skipped":67,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:11.418: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:13.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-1685" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":11,"skipped":94,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:59.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Sep  8 04:20:08.435: INFO: Creating a PV followed by a PVC
Sep  8 04:20:08.758: INFO: Waiting for PV local-pv8lbcc to bind to PVC pvc-gp4tg
Sep  8 04:20:08.758: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gp4tg] to have phase Bound
Sep  8 04:20:08.918: INFO: PersistentVolumeClaim pvc-gp4tg found and phase=Bound (160.070631ms)
Sep  8 04:20:08.918: INFO: Waiting up to 3m0s for PersistentVolume local-pv8lbcc to have phase Bound
Sep  8 04:20:09.079: INFO: PersistentVolume local-pv8lbcc found and phase=Bound (160.63911ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
STEP: Initializing test volumes
Sep  8 04:20:09.400: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-6d051f75-7ab3-415e-95b9-760a2841a710] Namespace:persistent-local-volumes-test-7622 PodName:hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-nqngz ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep  8 04:20:09.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:13.860 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:13.628: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
... skipping 11 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":5,"skipped":14,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:13.653: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 213 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:20:12.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":4,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:18.189: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-3919/configmap-test-ac2cf928-9508-47e0-886d-9128a67d23f3
STEP: Creating a pod to test consume configMaps
Sep  8 04:20:14.971: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb15fa1a-6302-4067-9271-f7548d814bd7" in namespace "configmap-3919" to be "Succeeded or Failed"
Sep  8 04:20:15.131: INFO: Pod "pod-configmaps-cb15fa1a-6302-4067-9271-f7548d814bd7": Phase="Pending", Reason="", readiness=false. Elapsed: 160.318882ms
Sep  8 04:20:17.292: INFO: Pod "pod-configmaps-cb15fa1a-6302-4067-9271-f7548d814bd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321157458s
STEP: Saw pod success
Sep  8 04:20:17.292: INFO: Pod "pod-configmaps-cb15fa1a-6302-4067-9271-f7548d814bd7" satisfied condition "Succeeded or Failed"
Sep  8 04:20:17.452: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-configmaps-cb15fa1a-6302-4067-9271-f7548d814bd7 container env-test: <nil>
STEP: delete the pod
Sep  8 04:20:17.785: INFO: Waiting for pod pod-configmaps-cb15fa1a-6302-4067-9271-f7548d814bd7 to disappear
Sep  8 04:20:17.945: INFO: Pod pod-configmaps-cb15fa1a-6302-4067-9271-f7548d814bd7 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:17.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3919" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:70.724 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:21.127: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:51.178: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Sep  8 04:20:09.132: INFO: PersistentVolumeClaim pvc-ql6l5 found but phase is Pending instead of Bound.
Sep  8 04:20:11.293: INFO: PersistentVolumeClaim pvc-ql6l5 found and phase=Bound (13.125451895s)
Sep  8 04:20:11.293: INFO: Waiting up to 3m0s for PersistentVolume local-mpqz4 to have phase Bound
Sep  8 04:20:11.452: INFO: PersistentVolume local-mpqz4 found and phase=Bound (159.253208ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-p5tq
STEP: Creating a pod to test subpath
Sep  8 04:20:11.932: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-p5tq" in namespace "provisioning-7445" to be "Succeeded or Failed"
Sep  8 04:20:12.094: INFO: Pod "pod-subpath-test-preprovisionedpv-p5tq": Phase="Pending", Reason="", readiness=false. Elapsed: 161.858535ms
Sep  8 04:20:14.255: INFO: Pod "pod-subpath-test-preprovisionedpv-p5tq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322488231s
Sep  8 04:20:16.421: INFO: Pod "pod-subpath-test-preprovisionedpv-p5tq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488886942s
Sep  8 04:20:18.582: INFO: Pod "pod-subpath-test-preprovisionedpv-p5tq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.649871047s
STEP: Saw pod success
Sep  8 04:20:18.582: INFO: Pod "pod-subpath-test-preprovisionedpv-p5tq" satisfied condition "Succeeded or Failed"
Sep  8 04:20:18.750: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-p5tq container test-container-volume-preprovisionedpv-p5tq: <nil>
STEP: delete the pod
Sep  8 04:20:19.085: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-p5tq to disappear
Sep  8 04:20:19.244: INFO: Pod pod-subpath-test-preprovisionedpv-p5tq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-p5tq
Sep  8 04:20:19.244: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-p5tq" in namespace "provisioning-7445"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":58,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:21.534: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 161 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":40,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:21.962: INFO: Driver local doesn't support ext4 -- skipping
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:22.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-9329" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":5,"skipped":40,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 164 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:23.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1338" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":6,"skipped":54,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Sep  8 04:19:53.987: INFO: PersistentVolumeClaim pvc-rzhbx found but phase is Pending instead of Bound.
Sep  8 04:19:56.147: INFO: PersistentVolumeClaim pvc-rzhbx found and phase=Bound (4.479167674s)
Sep  8 04:19:56.147: INFO: Waiting up to 3m0s for PersistentVolume local-r2jtn to have phase Bound
Sep  8 04:19:56.310: INFO: PersistentVolume local-r2jtn found and phase=Bound (163.275553ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-m67x
STEP: Creating a pod to test atomic-volume-subpath
Sep  8 04:19:56.791: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-m67x" in namespace "provisioning-6490" to be "Succeeded or Failed"
Sep  8 04:19:56.950: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Pending", Reason="", readiness=false. Elapsed: 159.043581ms
Sep  8 04:19:59.110: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319031976s
Sep  8 04:20:01.276: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484927467s
Sep  8 04:20:03.437: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Running", Reason="", readiness=true. Elapsed: 6.645672801s
Sep  8 04:20:05.598: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Running", Reason="", readiness=true. Elapsed: 8.807446519s
Sep  8 04:20:07.759: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Running", Reason="", readiness=true. Elapsed: 10.968442853s
Sep  8 04:20:09.930: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Running", Reason="", readiness=true. Elapsed: 13.139406002s
Sep  8 04:20:12.091: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Running", Reason="", readiness=true. Elapsed: 15.29977774s
Sep  8 04:20:14.251: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Running", Reason="", readiness=true. Elapsed: 17.460240007s
Sep  8 04:20:16.414: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Running", Reason="", readiness=true. Elapsed: 19.622800497s
Sep  8 04:20:18.574: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Running", Reason="", readiness=true. Elapsed: 21.783463167s
Sep  8 04:20:20.735: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.943505727s
STEP: Saw pod success
Sep  8 04:20:20.735: INFO: Pod "pod-subpath-test-preprovisionedpv-m67x" satisfied condition "Succeeded or Failed"
Sep  8 04:20:20.899: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-m67x container test-container-subpath-preprovisionedpv-m67x: <nil>
STEP: delete the pod
Sep  8 04:20:21.227: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-m67x to disappear
Sep  8 04:20:21.386: INFO: Pod pod-subpath-test-preprovisionedpv-m67x no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-m67x
Sep  8 04:20:21.386: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-m67x" in namespace "provisioning-6490"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":44,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:8.586 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":5,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:26.798: INFO: Only supported for providers [azure] (not aws)
... skipping 37 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":7,"skipped":57,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:20:23.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 81 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-b6b637f3-73c7-49e5-a132-bd9dda012431
STEP: Creating a pod to test consume configMaps
Sep  8 04:20:23.521: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15" in namespace "projected-7206" to be "Succeeded or Failed"
Sep  8 04:20:23.682: INFO: Pod "pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15": Phase="Pending", Reason="", readiness=false. Elapsed: 160.740235ms
Sep  8 04:20:25.838: INFO: Pod "pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317316706s
Sep  8 04:20:27.997: INFO: Pod "pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.475878174s
STEP: Saw pod success
Sep  8 04:20:27.997: INFO: Pod "pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15" satisfied condition "Succeeded or Failed"
Sep  8 04:20:28.153: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15 container agnhost-container: <nil>
STEP: delete the pod
Sep  8 04:20:28.499: INFO: Waiting for pod pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15 to disappear
Sep  8 04:20:28.659: INFO: Pod pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.568 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:28.984: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 27 lines ...
Sep  8 04:19:29.768: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-5495494wj
STEP: creating a claim
Sep  8 04:19:29.929: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-cn5h
STEP: Creating a pod to test subpath
Sep  8 04:19:30.446: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cn5h" in namespace "provisioning-5495" to be "Succeeded or Failed"
Sep  8 04:19:30.634: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 188.643875ms
Sep  8 04:19:32.831: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.38558677s
Sep  8 04:19:35.040: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.594073159s
Sep  8 04:19:37.205: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.759584203s
Sep  8 04:19:39.417: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.971482274s
Sep  8 04:19:41.576: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 11.130566834s
... skipping 4 lines ...
Sep  8 04:19:52.430: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 21.983707671s
Sep  8 04:19:54.598: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 24.151699855s
Sep  8 04:19:56.759: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 26.313039601s
Sep  8 04:19:58.921: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Pending", Reason="", readiness=false. Elapsed: 28.475364969s
Sep  8 04:20:01.084: INFO: Pod "pod-subpath-test-dynamicpv-cn5h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.637874453s
STEP: Saw pod success
Sep  8 04:20:01.084: INFO: Pod "pod-subpath-test-dynamicpv-cn5h" satisfied condition "Succeeded or Failed"
Sep  8 04:20:01.245: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-cn5h container test-container-subpath-dynamicpv-cn5h: <nil>
STEP: delete the pod
Sep  8 04:20:01.582: INFO: Waiting for pod pod-subpath-test-dynamicpv-cn5h to disappear
Sep  8 04:20:01.743: INFO: Pod pod-subpath-test-dynamicpv-cn5h no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-cn5h
Sep  8 04:20:01.743: INFO: Deleting pod "pod-subpath-test-dynamicpv-cn5h" in namespace "provisioning-5495"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:29.100: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":29,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-rz2r
STEP: Creating a pod to test atomic-volume-subpath
Sep  8 04:20:08.056: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rz2r" in namespace "subpath-4930" to be "Succeeded or Failed"
Sep  8 04:20:08.216: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 159.813841ms
Sep  8 04:20:10.376: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320058835s
Sep  8 04:20:12.537: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480238587s
Sep  8 04:20:14.698: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.641653687s
Sep  8 04:20:16.859: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Running", Reason="", readiness=true. Elapsed: 8.802152817s
Sep  8 04:20:19.020: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Running", Reason="", readiness=true. Elapsed: 10.96354663s
Sep  8 04:20:21.183: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Running", Reason="", readiness=true. Elapsed: 13.126376345s
Sep  8 04:20:23.360: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Running", Reason="", readiness=true. Elapsed: 15.303436324s
Sep  8 04:20:25.521: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Running", Reason="", readiness=true. Elapsed: 17.464420255s
Sep  8 04:20:27.681: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Running", Reason="", readiness=true. Elapsed: 19.624271643s
Sep  8 04:20:29.841: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Running", Reason="", readiness=true. Elapsed: 21.784591516s
Sep  8 04:20:32.004: INFO: Pod "pod-subpath-test-configmap-rz2r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.947302298s
STEP: Saw pod success
Sep  8 04:20:32.004: INFO: Pod "pod-subpath-test-configmap-rz2r" satisfied condition "Succeeded or Failed"
Sep  8 04:20:32.163: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-subpath-test-configmap-rz2r container test-container-subpath-configmap-rz2r: <nil>
STEP: delete the pod
Sep  8 04:20:32.497: INFO: Waiting for pod pod-subpath-test-configmap-rz2r to disappear
Sep  8 04:20:32.656: INFO: Pod pod-subpath-test-configmap-rz2r no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rz2r
Sep  8 04:20:32.656: INFO: Deleting pod "pod-subpath-test-configmap-rz2r" in namespace "subpath-4930"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":79,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:33.158: INFO: Only supported for providers [vsphere] (not aws)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":7,"skipped":22,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:20:27.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:6.064 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":8,"skipped":22,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:33.637: INFO: Only supported for providers [azure] (not aws)
... skipping 80 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should contain last line of the log
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:605
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":7,"skipped":32,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:34.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-6045" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":8,"skipped":92,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:34.693: INFO: Only supported for providers [gce gke] (not aws)
... skipping 68 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-0882697a-0df9-4672-ad6f-d71eccabb2a8
STEP: Creating a pod to test consume secrets
Sep  8 04:20:30.444: INFO: Waiting up to 5m0s for pod "pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142" in namespace "secrets-3774" to be "Succeeded or Failed"
Sep  8 04:20:30.602: INFO: Pod "pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142": Phase="Pending", Reason="", readiness=false. Elapsed: 158.060974ms
Sep  8 04:20:32.759: INFO: Pod "pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314670575s
Sep  8 04:20:34.915: INFO: Pod "pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.471339926s
STEP: Saw pod success
Sep  8 04:20:34.915: INFO: Pod "pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142" satisfied condition "Succeeded or Failed"
Sep  8 04:20:35.072: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142 container secret-volume-test: <nil>
STEP: delete the pod
Sep  8 04:20:35.420: INFO: Waiting for pod pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142 to disappear
Sep  8 04:20:35.576: INFO: Pod pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.546 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:35.911: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:37.300: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":68,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:39.274 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:40.597: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 167 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":9,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:45.032: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-f00a8d0d-832f-4668-98ea-e700bddbeaa8
STEP: Creating a pod to test consume configMaps
Sep  8 04:20:41.557: INFO: Waiting up to 5m0s for pod "pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168" in namespace "configmap-4076" to be "Succeeded or Failed"
Sep  8 04:20:41.728: INFO: Pod "pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168": Phase="Pending", Reason="", readiness=false. Elapsed: 170.774008ms
Sep  8 04:20:43.893: INFO: Pod "pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335799645s
Sep  8 04:20:46.058: INFO: Pod "pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.500772477s
STEP: Saw pod success
Sep  8 04:20:46.058: INFO: Pod "pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168" satisfied condition "Succeeded or Failed"
Sep  8 04:20:46.226: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168 container agnhost-container: <nil>
STEP: delete the pod
Sep  8 04:20:46.570: INFO: Waiting for pod pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168 to disappear
Sep  8 04:20:46.735: INFO: Pod pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.637 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 81 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:20:47.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Sep  8 04:20:48.672: INFO: found topology map[topology.kubernetes.io/zone:ap-northeast-2a]
Sep  8 04:20:48.672: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep  8 04:20:48.672: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 49 lines ...
• [SLOW TEST:26.608 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":7,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:50.905: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
Sep  8 04:20:40.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep  8 04:20:41.603: INFO: Waiting up to 5m0s for pod "security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4" in namespace "security-context-7507" to be "Succeeded or Failed"
Sep  8 04:20:41.767: INFO: Pod "security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 164.373456ms
Sep  8 04:20:43.929: INFO: Pod "security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326133931s
Sep  8 04:20:46.091: INFO: Pod "security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488075479s
Sep  8 04:20:48.256: INFO: Pod "security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.653271807s
Sep  8 04:20:50.539: INFO: Pod "security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.936651639s
STEP: Saw pod success
Sep  8 04:20:50.540: INFO: Pod "security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4" satisfied condition "Succeeded or Failed"
Sep  8 04:20:50.777: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4 container test-container: <nil>
STEP: delete the pod
Sep  8 04:20:51.273: INFO: Waiting for pod security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4 to disappear
Sep  8 04:20:51.493: INFO: Pod security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.276 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":6,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:51.927: INFO: Driver local doesn't support ext4 -- skipping
... skipping 122 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":9,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:53.549: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 112 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:53.959: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:20:53.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1408" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":8,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:54.439: INFO: Only supported for providers [gce gke] (not aws)
... skipping 59 lines ...
Sep  8 04:20:38.430: INFO: PersistentVolumeClaim pvc-mgvtl found but phase is Pending instead of Bound.
Sep  8 04:20:40.591: INFO: PersistentVolumeClaim pvc-mgvtl found and phase=Bound (6.648762326s)
Sep  8 04:20:40.591: INFO: Waiting up to 3m0s for PersistentVolume local-drfwl to have phase Bound
Sep  8 04:20:40.752: INFO: PersistentVolume local-drfwl found and phase=Bound (161.370504ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7fsz
STEP: Creating a pod to test subpath
Sep  8 04:20:41.241: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7fsz" in namespace "provisioning-6253" to be "Succeeded or Failed"
Sep  8 04:20:41.403: INFO: Pod "pod-subpath-test-preprovisionedpv-7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 161.674606ms
Sep  8 04:20:43.573: INFO: Pod "pod-subpath-test-preprovisionedpv-7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332241566s
Sep  8 04:20:45.736: INFO: Pod "pod-subpath-test-preprovisionedpv-7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.494898265s
Sep  8 04:20:47.898: INFO: Pod "pod-subpath-test-preprovisionedpv-7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.657559632s
Sep  8 04:20:50.122: INFO: Pod "pod-subpath-test-preprovisionedpv-7fsz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.88118513s
Sep  8 04:20:52.296: INFO: Pod "pod-subpath-test-preprovisionedpv-7fsz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.054991875s
STEP: Saw pod success
Sep  8 04:20:52.296: INFO: Pod "pod-subpath-test-preprovisionedpv-7fsz" satisfied condition "Succeeded or Failed"
Sep  8 04:20:52.460: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-7fsz container test-container-volume-preprovisionedpv-7fsz: <nil>
STEP: delete the pod
Sep  8 04:20:53.161: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7fsz to disappear
Sep  8 04:20:53.352: INFO: Pod pod-subpath-test-preprovisionedpv-7fsz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7fsz
Sep  8 04:20:53.352: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7fsz" in namespace "provisioning-6253"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:56.060: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":100,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:20:47.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":13,"skipped":100,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 16 lines ...
Sep  8 04:20:38.916: INFO: PersistentVolumeClaim pvc-9l5xt found but phase is Pending instead of Bound.
Sep  8 04:20:41.075: INFO: PersistentVolumeClaim pvc-9l5xt found and phase=Bound (2.320260678s)
Sep  8 04:20:41.075: INFO: Waiting up to 3m0s for PersistentVolume local-nxpns to have phase Bound
Sep  8 04:20:41.234: INFO: PersistentVolume local-nxpns found and phase=Bound (158.995951ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-ksxt
STEP: Creating a pod to test exec-volume-test
Sep  8 04:20:41.711: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-ksxt" in namespace "volume-1554" to be "Succeeded or Failed"
Sep  8 04:20:41.869: INFO: Pod "exec-volume-test-preprovisionedpv-ksxt": Phase="Pending", Reason="", readiness=false. Elapsed: 158.46078ms
Sep  8 04:20:44.024: INFO: Pod "exec-volume-test-preprovisionedpv-ksxt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313138776s
Sep  8 04:20:46.179: INFO: Pod "exec-volume-test-preprovisionedpv-ksxt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468503385s
Sep  8 04:20:48.335: INFO: Pod "exec-volume-test-preprovisionedpv-ksxt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623836416s
Sep  8 04:20:50.596: INFO: Pod "exec-volume-test-preprovisionedpv-ksxt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.88495941s
Sep  8 04:20:52.805: INFO: Pod "exec-volume-test-preprovisionedpv-ksxt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.094021331s
STEP: Saw pod success
Sep  8 04:20:52.805: INFO: Pod "exec-volume-test-preprovisionedpv-ksxt" satisfied condition "Succeeded or Failed"
Sep  8 04:20:53.126: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-ksxt container exec-container-preprovisionedpv-ksxt: <nil>
STEP: delete the pod
Sep  8 04:20:53.571: INFO: Waiting for pod exec-volume-test-preprovisionedpv-ksxt to disappear
Sep  8 04:20:53.775: INFO: Pod exec-volume-test-preprovisionedpv-ksxt no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-ksxt
Sep  8 04:20:53.775: INFO: Deleting pod "exec-volume-test-preprovisionedpv-ksxt" in namespace "volume-1554"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:20:57.553: INFO: Only supported for providers [vsphere] (not aws)
... skipping 215 lines ...
Sep  8 04:20:47.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep  8 04:20:47.916: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  8 04:20:48.249: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4906" in namespace "provisioning-4906" to be "Succeeded or Failed"
Sep  8 04:20:48.414: INFO: Pod "hostpath-symlink-prep-provisioning-4906": Phase="Pending", Reason="", readiness=false. Elapsed: 164.693773ms
Sep  8 04:20:50.632: INFO: Pod "hostpath-symlink-prep-provisioning-4906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383192106s
Sep  8 04:20:52.852: INFO: Pod "hostpath-symlink-prep-provisioning-4906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.603356445s
STEP: Saw pod success
Sep  8 04:20:52.852: INFO: Pod "hostpath-symlink-prep-provisioning-4906" satisfied condition "Succeeded or Failed"
Sep  8 04:20:52.853: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4906" in namespace "provisioning-4906"
Sep  8 04:20:53.234: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4906" to be fully deleted
Sep  8 04:20:53.533: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7k4m
STEP: Creating a pod to test subpath
Sep  8 04:20:53.763: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7k4m" in namespace "provisioning-4906" to be "Succeeded or Failed"
Sep  8 04:20:53.971: INFO: Pod "pod-subpath-test-inlinevolume-7k4m": Phase="Pending", Reason="", readiness=false. Elapsed: 207.190742ms
Sep  8 04:20:56.155: INFO: Pod "pod-subpath-test-inlinevolume-7k4m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391747542s
Sep  8 04:20:58.354: INFO: Pod "pod-subpath-test-inlinevolume-7k4m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.590811265s
STEP: Saw pod success
Sep  8 04:20:58.354: INFO: Pod "pod-subpath-test-inlinevolume-7k4m" satisfied condition "Succeeded or Failed"
Sep  8 04:20:58.522: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-7k4m container test-container-volume-inlinevolume-7k4m: <nil>
STEP: delete the pod
Sep  8 04:20:58.870: INFO: Waiting for pod pod-subpath-test-inlinevolume-7k4m to disappear
Sep  8 04:20:59.035: INFO: Pod pod-subpath-test-inlinevolume-7k4m no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7k4m
Sep  8 04:20:59.035: INFO: Deleting pod "pod-subpath-test-inlinevolume-7k4m" in namespace "provisioning-4906"
STEP: Deleting pod
Sep  8 04:20:59.199: INFO: Deleting pod "pod-subpath-test-inlinevolume-7k4m" in namespace "provisioning-4906"
Sep  8 04:20:59.530: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4906" in namespace "provisioning-4906" to be "Succeeded or Failed"
Sep  8 04:20:59.694: INFO: Pod "hostpath-symlink-prep-provisioning-4906": Phase="Pending", Reason="", readiness=false. Elapsed: 164.542152ms
Sep  8 04:21:01.863: INFO: Pod "hostpath-symlink-prep-provisioning-4906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.333361433s
STEP: Saw pod success
Sep  8 04:21:01.863: INFO: Pod "hostpath-symlink-prep-provisioning-4906" satisfied condition "Succeeded or Failed"
Sep  8 04:21:01.863: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4906" in namespace "provisioning-4906"
Sep  8 04:21:02.043: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4906" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:21:02.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4906" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":74,"failed":0}
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:21:02.552: INFO: >>> kubeConfig: /root/.kube/config
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:21:03.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-1936" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":10,"skipped":74,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:03.915: INFO: Only supported for providers [azure] (not aws)
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":8,"skipped":57,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:20:26.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:39.934 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:13.440 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":10,"skipped":72,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:07.049: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:21:08.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1475" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":10,"skipped":60,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:08.375: INFO: Only supported for providers [azure] (not aws)
... skipping 98 lines ...
• [SLOW TEST:31.972 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":7,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:09.357: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":4,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:19:28.673: INFO: >>> kubeConfig: /root/.kube/config
... skipping 7 lines ...
Sep  8 04:19:29.468: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-1752ntxfm      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1752    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1752ntxfm,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1752    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1752ntxfm,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1752    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1752ntxfm,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-1752ntxfm    c09f0a5a-09d9-454e-93ed-ad65d6e1e4e7 5591 0 2021-09-08 04:19:29 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-09-08 04:19:29 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-d46mj pvc- provisioning-1752  3ab2ed48-2621-4ad4-9fd9-90ed260e565b 5608 0 2021-09-08 04:19:30 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-09-08 04:19:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1752ntxfm,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-f4124f6b-3818-4f13-b550-504bfbc0f912 in namespace provisioning-1752
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Sep  8 04:20:03.396: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-cdzh4" in namespace "provisioning-1752" to be "Succeeded or Failed"
Sep  8 04:20:03.556: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 159.155787ms
Sep  8 04:20:05.714: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31811143s
Sep  8 04:20:07.878: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481530101s
Sep  8 04:20:10.037: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.64086001s
Sep  8 04:20:12.197: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800583891s
Sep  8 04:20:14.357: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.960761262s
... skipping 7 lines ...
Sep  8 04:20:31.644: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.247590818s
Sep  8 04:20:33.804: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 30.407349406s
Sep  8 04:20:35.963: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 32.566357291s
Sep  8 04:20:38.123: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.72653957s
Sep  8 04:20:40.282: INFO: Pod "pvc-volume-tester-writer-cdzh4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.885657725s
STEP: Saw pod success
Sep  8 04:20:40.282: INFO: Pod "pvc-volume-tester-writer-cdzh4" satisfied condition "Succeeded or Failed"
Sep  8 04:20:40.636: INFO: Pod pvc-volume-tester-writer-cdzh4 has the following logs: 
Sep  8 04:20:40.636: INFO: Deleting pod "pvc-volume-tester-writer-cdzh4" in namespace "provisioning-1752"
Sep  8 04:20:40.806: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-cdzh4" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-53-124.ap-northeast-2.compute.internal"
Sep  8 04:20:41.455: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-dfhk6" in namespace "provisioning-1752" to be "Succeeded or Failed"
Sep  8 04:20:41.614: INFO: Pod "pvc-volume-tester-reader-dfhk6": Phase="Pending", Reason="", readiness=false. Elapsed: 158.402613ms
Sep  8 04:20:43.774: INFO: Pod "pvc-volume-tester-reader-dfhk6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31907127s
Sep  8 04:20:45.933: INFO: Pod "pvc-volume-tester-reader-dfhk6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47806773s
Sep  8 04:20:48.093: INFO: Pod "pvc-volume-tester-reader-dfhk6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.63832004s
STEP: Saw pod success
Sep  8 04:20:48.094: INFO: Pod "pvc-volume-tester-reader-dfhk6" satisfied condition "Succeeded or Failed"
Sep  8 04:20:48.422: INFO: Pod pvc-volume-tester-reader-dfhk6 has the following logs: hello world

Sep  8 04:20:48.422: INFO: Deleting pod "pvc-volume-tester-reader-dfhk6" in namespace "provisioning-1752"
Sep  8 04:20:48.586: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-dfhk6" to be fully deleted
Sep  8 04:20:48.748: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-d46mj] to have phase Bound
Sep  8 04:20:48.907: INFO: PersistentVolumeClaim pvc-d46mj found and phase=Bound (159.521434ms)
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":5,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:10.960: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":50,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:11.350: INFO: Only supported for providers [openstack] (not aws)
... skipping 48 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:21:11.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":6,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:11.511: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":96,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:13.743: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:21:09.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-51d5f32a-2952-406a-9b77-08cabf2b6408
STEP: Creating a pod to test consume configMaps
Sep  8 04:21:10.519: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4" in namespace "projected-7923" to be "Succeeded or Failed"
Sep  8 04:21:10.681: INFO: Pod "pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4": Phase="Pending", Reason="", readiness=false. Elapsed: 161.842346ms
Sep  8 04:21:12.843: INFO: Pod "pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324467848s
Sep  8 04:21:15.006: INFO: Pod "pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.486774313s
STEP: Saw pod success
Sep  8 04:21:15.006: INFO: Pod "pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4" satisfied condition "Succeeded or Failed"
Sep  8 04:21:15.168: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4 container agnhost-container: <nil>
STEP: delete the pod
Sep  8 04:21:15.500: INFO: Waiting for pod pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4 to disappear
Sep  8 04:21:15.665: INFO: Pod pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 15 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  8 04:21:14.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f" in namespace "downward-api-6034" to be "Succeeded or Failed"
Sep  8 04:21:14.943: INFO: Pod "downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f": Phase="Pending", Reason="", readiness=false. Elapsed: 161.64688ms
Sep  8 04:21:17.106: INFO: Pod "downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32418921s
Sep  8 04:21:19.269: INFO: Pod "downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.487076245s
STEP: Saw pod success
Sep  8 04:21:19.269: INFO: Pod "downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f" satisfied condition "Succeeded or Failed"
Sep  8 04:21:19.431: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f container client-container: <nil>
STEP: delete the pod
Sep  8 04:21:19.765: INFO: Waiting for pod downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f to disappear
Sep  8 04:21:19.928: INFO: Pod downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.453 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":107,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:20.284: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 56 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":83,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:21.148: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 134 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":7,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:22.109: INFO: Only supported for providers [gce gke] (not aws)
... skipping 128 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:21:25.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7928" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":12,"skipped":94,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:25.861: INFO: Only supported for providers [azure] (not aws)
... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":40,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:29.159: INFO: Only supported for providers [gce gke] (not aws)
... skipping 227 lines ...
• [SLOW TEST:32.536 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":7,"skipped":33,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":9,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:30.176: INFO: Only supported for providers [azure] (not aws)
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:21:31.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2580" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:6.831 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:32.723: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 23 lines ...
Sep  8 04:21:32.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep  8 04:21:33.181: INFO: Waiting up to 5m0s for pod "security-context-5891998d-cd1e-4591-b403-666fe179971f" in namespace "security-context-1955" to be "Succeeded or Failed"
Sep  8 04:21:33.335: INFO: Pod "security-context-5891998d-cd1e-4591-b403-666fe179971f": Phase="Pending", Reason="", readiness=false. Elapsed: 153.994777ms
Sep  8 04:21:35.495: INFO: Pod "security-context-5891998d-cd1e-4591-b403-666fe179971f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.313426599s
STEP: Saw pod success
Sep  8 04:21:35.495: INFO: Pod "security-context-5891998d-cd1e-4591-b403-666fe179971f" satisfied condition "Succeeded or Failed"
Sep  8 04:21:35.661: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod security-context-5891998d-cd1e-4591-b403-666fe179971f container test-container: <nil>
STEP: delete the pod
Sep  8 04:21:35.998: INFO: Waiting for pod security-context-5891998d-cd1e-4591-b403-666fe179971f to disappear
Sep  8 04:21:36.152: INFO: Pod security-context-5891998d-cd1e-4591-b403-666fe179971f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:21:36.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-1955" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":58,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:36.515: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":14,"skipped":105,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:21:32.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep  8 04:21:33.181: INFO: Waiting up to 5m0s for pod "security-context-03df724c-55cd-47f9-aed5-ef398d2a5923" in namespace "security-context-4415" to be "Succeeded or Failed"
Sep  8 04:21:33.339: INFO: Pod "security-context-03df724c-55cd-47f9-aed5-ef398d2a5923": Phase="Pending", Reason="", readiness=false. Elapsed: 157.746459ms
Sep  8 04:21:35.499: INFO: Pod "security-context-03df724c-55cd-47f9-aed5-ef398d2a5923": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317973085s
Sep  8 04:21:37.661: INFO: Pod "security-context-03df724c-55cd-47f9-aed5-ef398d2a5923": Phase="Pending", Reason="", readiness=false. Elapsed: 4.480298773s
Sep  8 04:21:39.822: INFO: Pod "security-context-03df724c-55cd-47f9-aed5-ef398d2a5923": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.64141796s
STEP: Saw pod success
Sep  8 04:21:39.822: INFO: Pod "security-context-03df724c-55cd-47f9-aed5-ef398d2a5923" satisfied condition "Succeeded or Failed"
Sep  8 04:21:39.980: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod security-context-03df724c-55cd-47f9-aed5-ef398d2a5923 container test-container: <nil>
STEP: delete the pod
Sep  8 04:21:40.304: INFO: Waiting for pod security-context-03df724c-55cd-47f9-aed5-ef398d2a5923 to disappear
Sep  8 04:21:40.463: INFO: Pod security-context-03df724c-55cd-47f9-aed5-ef398d2a5923 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.556 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":15,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:40.792: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:43.354: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 71 lines ...
Sep  8 04:21:39.202: INFO: PersistentVolumeClaim pvc-rlww9 found but phase is Pending instead of Bound.
Sep  8 04:21:41.360: INFO: PersistentVolumeClaim pvc-rlww9 found and phase=Bound (6.66002131s)
Sep  8 04:21:41.360: INFO: Waiting up to 3m0s for PersistentVolume local-wsl8r to have phase Bound
Sep  8 04:21:41.518: INFO: PersistentVolume local-wsl8r found and phase=Bound (157.337818ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kzbz
STEP: Creating a pod to test subpath
Sep  8 04:21:41.991: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kzbz" in namespace "provisioning-5573" to be "Succeeded or Failed"
Sep  8 04:21:42.149: INFO: Pod "pod-subpath-test-preprovisionedpv-kzbz": Phase="Pending", Reason="", readiness=false. Elapsed: 158.258617ms
Sep  8 04:21:44.307: INFO: Pod "pod-subpath-test-preprovisionedpv-kzbz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316352793s
Sep  8 04:21:46.475: INFO: Pod "pod-subpath-test-preprovisionedpv-kzbz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.48376438s
STEP: Saw pod success
Sep  8 04:21:46.475: INFO: Pod "pod-subpath-test-preprovisionedpv-kzbz" satisfied condition "Succeeded or Failed"
Sep  8 04:21:46.640: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-kzbz container test-container-volume-preprovisionedpv-kzbz: <nil>
STEP: delete the pod
Sep  8 04:21:46.993: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kzbz to disappear
Sep  8 04:21:47.194: INFO: Pod pod-subpath-test-preprovisionedpv-kzbz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kzbz
Sep  8 04:21:47.195: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kzbz" in namespace "provisioning-5573"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":75,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:81.501 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":42,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:50.527: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 66 lines ...
• [SLOW TEST:9.885 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:53.265: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 148 lines ...
• [SLOW TEST:20.667 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":44,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":14,"skipped":102,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:53.420: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 258 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:496
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":5,"skipped":33,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:55.097: INFO: Only supported for providers [azure] (not aws)
... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:21:58.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8455" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":4,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:58.394: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 49 lines ...
Sep  8 04:21:40.250: INFO: PersistentVolumeClaim pvc-8gz2n found and phase=Bound (15.299817228s)
Sep  8 04:21:40.250: INFO: Waiting up to 3m0s for PersistentVolume nfs-v2hnz to have phase Bound
Sep  8 04:21:40.410: INFO: PersistentVolume nfs-v2hnz found and phase=Bound (160.035536ms)
STEP: Checking pod has write access to PersistentVolume
Sep  8 04:21:40.731: INFO: Creating nfs test pod
Sep  8 04:21:40.921: INFO: Pod should terminate with exitcode 0 (success)
Sep  8 04:21:40.921: INFO: Waiting up to 5m0s for pod "pvc-tester-xl2qp" in namespace "pv-7243" to be "Succeeded or Failed"
Sep  8 04:21:41.084: INFO: Pod "pvc-tester-xl2qp": Phase="Pending", Reason="", readiness=false. Elapsed: 162.530525ms
Sep  8 04:21:43.258: INFO: Pod "pvc-tester-xl2qp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336789172s
Sep  8 04:21:45.419: INFO: Pod "pvc-tester-xl2qp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497574841s
Sep  8 04:21:47.586: INFO: Pod "pvc-tester-xl2qp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.664387417s
STEP: Saw pod success
Sep  8 04:21:47.586: INFO: Pod "pvc-tester-xl2qp" satisfied condition "Succeeded or Failed"
Sep  8 04:21:47.586: INFO: Pod pvc-tester-xl2qp succeeded 
Sep  8 04:21:47.586: INFO: Deleting pod "pvc-tester-xl2qp" in namespace "pv-7243"
Sep  8 04:21:47.757: INFO: Wait up to 5m0s for pod "pvc-tester-xl2qp" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep  8 04:21:47.917: INFO: Deleting PVC pvc-8gz2n to trigger reclamation of PV nfs-v2hnz
Sep  8 04:21:47.917: INFO: Deleting PersistentVolumeClaim "pvc-8gz2n"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":11,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:59.547: INFO: Only supported for providers [gce gke] (not aws)
... skipping 71 lines ...
Sep  8 04:21:28.381: INFO: PersistentVolume nfs-pmzgs found and phase=Bound (158.488421ms)
Sep  8 04:21:28.540: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-d8cb2] to have phase Bound
Sep  8 04:21:28.698: INFO: PersistentVolumeClaim pvc-d8cb2 found and phase=Bound (158.550536ms)
STEP: Checking pod has write access to PersistentVolumes
Sep  8 04:21:28.857: INFO: Creating nfs test pod
Sep  8 04:21:29.016: INFO: Pod should terminate with exitcode 0 (success)
Sep  8 04:21:29.016: INFO: Waiting up to 5m0s for pod "pvc-tester-vk9sb" in namespace "pv-9756" to be "Succeeded or Failed"
Sep  8 04:21:29.175: INFO: Pod "pvc-tester-vk9sb": Phase="Pending", Reason="", readiness=false. Elapsed: 158.481826ms
Sep  8 04:21:31.335: INFO: Pod "pvc-tester-vk9sb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.318113967s
STEP: Saw pod success
Sep  8 04:21:31.335: INFO: Pod "pvc-tester-vk9sb" satisfied condition "Succeeded or Failed"
Sep  8 04:21:31.335: INFO: Pod pvc-tester-vk9sb succeeded 
Sep  8 04:21:31.335: INFO: Deleting pod "pvc-tester-vk9sb" in namespace "pv-9756"
Sep  8 04:21:31.516: INFO: Wait up to 5m0s for pod "pvc-tester-vk9sb" to be fully deleted
Sep  8 04:21:31.835: INFO: Creating nfs test pod
Sep  8 04:21:31.995: INFO: Pod should terminate with exitcode 0 (success)
Sep  8 04:21:31.995: INFO: Waiting up to 5m0s for pod "pvc-tester-rcrsc" in namespace "pv-9756" to be "Succeeded or Failed"
Sep  8 04:21:32.154: INFO: Pod "pvc-tester-rcrsc": Phase="Pending", Reason="", readiness=false. Elapsed: 158.503287ms
Sep  8 04:21:34.319: INFO: Pod "pvc-tester-rcrsc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.323668936s
STEP: Saw pod success
Sep  8 04:21:34.319: INFO: Pod "pvc-tester-rcrsc" satisfied condition "Succeeded or Failed"
Sep  8 04:21:34.319: INFO: Pod pvc-tester-rcrsc succeeded 
Sep  8 04:21:34.319: INFO: Deleting pod "pvc-tester-rcrsc" in namespace "pv-9756"
Sep  8 04:21:34.487: INFO: Wait up to 5m0s for pod "pvc-tester-rcrsc" to be fully deleted
Sep  8 04:21:34.884: INFO: Creating nfs test pod
Sep  8 04:21:35.045: INFO: Pod should terminate with exitcode 0 (success)
Sep  8 04:21:35.045: INFO: Waiting up to 5m0s for pod "pvc-tester-l9scv" in namespace "pv-9756" to be "Succeeded or Failed"
Sep  8 04:21:35.209: INFO: Pod "pvc-tester-l9scv": Phase="Pending", Reason="", readiness=false. Elapsed: 163.161328ms
Sep  8 04:21:37.369: INFO: Pod "pvc-tester-l9scv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32371766s
Sep  8 04:21:39.529: INFO: Pod "pvc-tester-l9scv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483106247s
Sep  8 04:21:41.688: INFO: Pod "pvc-tester-l9scv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642410488s
Sep  8 04:21:43.848: INFO: Pod "pvc-tester-l9scv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.802237478s
STEP: Saw pod success
Sep  8 04:21:43.848: INFO: Pod "pvc-tester-l9scv" satisfied condition "Succeeded or Failed"
Sep  8 04:21:43.848: INFO: Pod pvc-tester-l9scv succeeded 
Sep  8 04:21:43.848: INFO: Deleting pod "pvc-tester-l9scv" in namespace "pv-9756"
Sep  8 04:21:44.014: INFO: Wait up to 5m0s for pod "pvc-tester-l9scv" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Sep  8 04:21:44.491: INFO: Deleting PVC pvc-d8cb2 to trigger reclamation of PV nfs-pmzgs
Sep  8 04:21:44.491: INFO: Deleting PersistentVolumeClaim "pvc-d8cb2"
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":67,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:21:16.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":67,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:21:59.973: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 158 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":4,"skipped":14,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:20:57.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 38 lines ...
Sep  8 04:21:04.696: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2239
Sep  8 04:21:04.853: INFO: creating *v1.StatefulSet: csi-mock-volumes-2239-3152/csi-mockplugin-attacher
Sep  8 04:21:05.010: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2239"
Sep  8 04:21:05.167: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2239 to register on node ip-172-20-48-118.ap-northeast-2.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Sep  8 04:21:19.433: INFO: Error getting logs for pod inline-volume-78z4w: the server rejected our request for an unknown reason (get pods inline-volume-78z4w)
Sep  8 04:21:19.590: INFO: Deleting pod "inline-volume-78z4w" in namespace "csi-mock-volumes-2239"
Sep  8 04:21:19.747: INFO: Wait up to 5m0s for pod "inline-volume-78z4w" to be fully deleted
STEP: Deleting the previously created pod
Sep  8 04:21:24.061: INFO: Deleting pod "pvc-volume-tester-rn79c" in namespace "csi-mock-volumes-2239"
Sep  8 04:21:24.218: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rn79c" to be fully deleted
STEP: Checking CSI driver logs
Sep  8 04:21:38.701: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2239
Sep  8 04:21:38.701: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 8892645c-371a-40c2-adc2-20ef0a083040
Sep  8 04:21:38.701: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Sep  8 04:21:38.701: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Sep  8 04:21:38.701: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-rn79c
Sep  8 04:21:38.701: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-5aff501c68f0c4a0d32f033343120fe2589fbf40cdf9782e42bd7b489c314982","target_path":"/var/lib/kubelet/pods/8892645c-371a-40c2-adc2-20ef0a083040/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-rn79c
Sep  8 04:21:38.701: INFO: Deleting pod "pvc-volume-tester-rn79c" in namespace "csi-mock-volumes-2239"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-2239
STEP: Waiting for namespaces [csi-mock-volumes-2239] to vanish
STEP: uninstalling csi mock driver
... skipping 50 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep  8 04:22:01.112: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-0ddd6e04-403a-4d81-9d0e-cb43bd19f9d7" in namespace "security-context-test-2615" to be "Succeeded or Failed"
Sep  8 04:22:01.277: INFO: Pod "busybox-privileged-false-0ddd6e04-403a-4d81-9d0e-cb43bd19f9d7": Phase="Pending", Reason="", readiness=false. Elapsed: 165.517262ms
Sep  8 04:22:03.446: INFO: Pod "busybox-privileged-false-0ddd6e04-403a-4d81-9d0e-cb43bd19f9d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334281278s
Sep  8 04:22:05.611: INFO: Pod "busybox-privileged-false-0ddd6e04-403a-4d81-9d0e-cb43bd19f9d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.499255525s
Sep  8 04:22:07.776: INFO: Pod "busybox-privileged-false-0ddd6e04-403a-4d81-9d0e-cb43bd19f9d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.664109404s
Sep  8 04:22:07.776: INFO: Pod "busybox-privileged-false-0ddd6e04-403a-4d81-9d0e-cb43bd19f9d7" satisfied condition "Succeeded or Failed"
Sep  8 04:22:07.943: INFO: Got logs for pod "busybox-privileged-false-0ddd6e04-403a-4d81-9d0e-cb43bd19f9d7": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:22:07.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2615" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":96,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:08.303: INFO: Driver local doesn't support ext4 -- skipping
... skipping 64 lines ...
• [SLOW TEST:256.181 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:09.582: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:20:43.611: INFO: >>> kubeConfig: /root/.kube/config
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":7,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:11.113: INFO: Only supported for providers [openstack] (not aws)
... skipping 83 lines ...
Sep  8 04:21:08.920: INFO: PersistentVolumeClaim csi-hostpathqjl2t found but phase is Pending instead of Bound.
Sep  8 04:21:11.078: INFO: PersistentVolumeClaim csi-hostpathqjl2t found but phase is Pending instead of Bound.
Sep  8 04:21:13.237: INFO: PersistentVolumeClaim csi-hostpathqjl2t found but phase is Pending instead of Bound.
Sep  8 04:21:15.396: INFO: PersistentVolumeClaim csi-hostpathqjl2t found and phase=Bound (21.84699059s)
STEP: Expanding non-expandable pvc
Sep  8 04:21:15.711: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Sep  8 04:21:16.030: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:18.347: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:20.346: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:22.347: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:24.348: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:26.346: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:28.346: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:30.349: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:32.350: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:34.369: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:36.347: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:38.346: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:40.346: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:42.347: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:44.347: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:46.348: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep  8 04:21:46.682: INFO: Error updating pvc csi-hostpathqjl2t: persistentvolumeclaims "csi-hostpathqjl2t" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Sep  8 04:21:46.683: INFO: Deleting PersistentVolumeClaim "csi-hostpathqjl2t"
Sep  8 04:21:46.847: INFO: Waiting up to 5m0s for PersistentVolume pvc-15ad094a-1e35-4a96-9d78-91521c51811b to get deleted
Sep  8 04:21:47.049: INFO: PersistentVolume pvc-15ad094a-1e35-4a96-9d78-91521c51811b was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-6708
... skipping 57 lines ...
Sep  8 04:21:50.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Sep  8 04:21:51.333: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  8 04:21:51.650: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6625" in namespace "provisioning-6625" to be "Succeeded or Failed"
Sep  8 04:21:51.807: INFO: Pod "hostpath-symlink-prep-provisioning-6625": Phase="Pending", Reason="", readiness=false. Elapsed: 157.048127ms
Sep  8 04:21:53.964: INFO: Pod "hostpath-symlink-prep-provisioning-6625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313740915s
Sep  8 04:21:56.122: INFO: Pod "hostpath-symlink-prep-provisioning-6625": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47127545s
Sep  8 04:21:58.279: INFO: Pod "hostpath-symlink-prep-provisioning-6625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.629082009s
STEP: Saw pod success
Sep  8 04:21:58.280: INFO: Pod "hostpath-symlink-prep-provisioning-6625" satisfied condition "Succeeded or Failed"
Sep  8 04:21:58.280: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6625" in namespace "provisioning-6625"
Sep  8 04:21:58.474: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6625" to be fully deleted
Sep  8 04:21:58.645: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-mvp8
STEP: Creating a pod to test subpath
Sep  8 04:21:58.823: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-mvp8" in namespace "provisioning-6625" to be "Succeeded or Failed"
Sep  8 04:21:58.980: INFO: Pod "pod-subpath-test-inlinevolume-mvp8": Phase="Pending", Reason="", readiness=false. Elapsed: 156.55211ms
Sep  8 04:22:01.142: INFO: Pod "pod-subpath-test-inlinevolume-mvp8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318680716s
Sep  8 04:22:03.300: INFO: Pod "pod-subpath-test-inlinevolume-mvp8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476651811s
Sep  8 04:22:05.457: INFO: Pod "pod-subpath-test-inlinevolume-mvp8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.633725032s
STEP: Saw pod success
Sep  8 04:22:05.457: INFO: Pod "pod-subpath-test-inlinevolume-mvp8" satisfied condition "Succeeded or Failed"
Sep  8 04:22:05.614: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-mvp8 container test-container-subpath-inlinevolume-mvp8: <nil>
STEP: delete the pod
Sep  8 04:22:05.937: INFO: Waiting for pod pod-subpath-test-inlinevolume-mvp8 to disappear
Sep  8 04:22:06.094: INFO: Pod pod-subpath-test-inlinevolume-mvp8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-mvp8
Sep  8 04:22:06.094: INFO: Deleting pod "pod-subpath-test-inlinevolume-mvp8" in namespace "provisioning-6625"
STEP: Deleting pod
Sep  8 04:22:06.263: INFO: Deleting pod "pod-subpath-test-inlinevolume-mvp8" in namespace "provisioning-6625"
Sep  8 04:22:06.577: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6625" in namespace "provisioning-6625" to be "Succeeded or Failed"
Sep  8 04:22:06.735: INFO: Pod "hostpath-symlink-prep-provisioning-6625": Phase="Pending", Reason="", readiness=false. Elapsed: 158.529672ms
Sep  8 04:22:08.893: INFO: Pod "hostpath-symlink-prep-provisioning-6625": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316195167s
Sep  8 04:22:11.050: INFO: Pod "hostpath-symlink-prep-provisioning-6625": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.473529795s
STEP: Saw pod success
Sep  8 04:22:11.050: INFO: Pod "hostpath-symlink-prep-provisioning-6625" satisfied condition "Succeeded or Failed"
Sep  8 04:22:11.050: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6625" in namespace "provisioning-6625"
Sep  8 04:22:11.228: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6625" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:22:11.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6625" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":10,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:11.737: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":49,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:22:08.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-72388788-6ad6-433e-9b01-a4ccbe5f1616
STEP: Creating a pod to test consume secrets
Sep  8 04:22:09.487: INFO: Waiting up to 5m0s for pod "pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2" in namespace "secrets-1396" to be "Succeeded or Failed"
Sep  8 04:22:09.653: INFO: Pod "pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2": Phase="Pending", Reason="", readiness=false. Elapsed: 165.742292ms
Sep  8 04:22:11.818: INFO: Pod "pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330897707s
Sep  8 04:22:13.983: INFO: Pod "pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.496020834s
Sep  8 04:22:16.152: INFO: Pod "pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.665008205s
STEP: Saw pod success
Sep  8 04:22:16.152: INFO: Pod "pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2" satisfied condition "Succeeded or Failed"
Sep  8 04:22:16.316: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2 container secret-volume-test: <nil>
STEP: delete the pod
Sep  8 04:22:16.654: INFO: Waiting for pod pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2 to disappear
Sep  8 04:22:16.819: INFO: Pod pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.818 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":103,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:17.165: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
• [SLOW TEST:8.710 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":9,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
Sep  8 04:20:49.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0908 04:20:50.268703    4885 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-3360" for this suite.


• [SLOW TEST:92.547 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":7,"skipped":40,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":11,"skipped":46,"failed":0}
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:22:21.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 147 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:22:24.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-1507" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":12,"skipped":61,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:22:02.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:23.312 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":7,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:26.226: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:26.383: INFO: Driver emptydir doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 241 lines ...
Sep  8 04:22:08.118: INFO: PersistentVolumeClaim pvc-nbxxd found but phase is Pending instead of Bound.
Sep  8 04:22:10.276: INFO: PersistentVolumeClaim pvc-nbxxd found and phase=Bound (10.949933401s)
Sep  8 04:22:10.276: INFO: Waiting up to 3m0s for PersistentVolume local-hwntj to have phase Bound
Sep  8 04:22:10.434: INFO: PersistentVolume local-hwntj found and phase=Bound (157.485987ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6f9d
STEP: Creating a pod to test subpath
Sep  8 04:22:10.911: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6f9d" in namespace "provisioning-4166" to be "Succeeded or Failed"
Sep  8 04:22:11.068: INFO: Pod "pod-subpath-test-preprovisionedpv-6f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 157.437151ms
Sep  8 04:22:13.227: INFO: Pod "pod-subpath-test-preprovisionedpv-6f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316259914s
Sep  8 04:22:15.385: INFO: Pod "pod-subpath-test-preprovisionedpv-6f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474067643s
Sep  8 04:22:17.543: INFO: Pod "pod-subpath-test-preprovisionedpv-6f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.632398797s
Sep  8 04:22:19.702: INFO: Pod "pod-subpath-test-preprovisionedpv-6f9d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.791547518s
Sep  8 04:22:21.861: INFO: Pod "pod-subpath-test-preprovisionedpv-6f9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.949921794s
STEP: Saw pod success
Sep  8 04:22:21.861: INFO: Pod "pod-subpath-test-preprovisionedpv-6f9d" satisfied condition "Succeeded or Failed"
Sep  8 04:22:22.018: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-6f9d container test-container-subpath-preprovisionedpv-6f9d: <nil>
STEP: delete the pod
Sep  8 04:22:22.347: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6f9d to disappear
Sep  8 04:22:22.505: INFO: Pod pod-subpath-test-preprovisionedpv-6f9d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6f9d
Sep  8 04:22:22.505: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6f9d" in namespace "provisioning-4166"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Sep  8 04:22:09.055: INFO: PersistentVolumeClaim pvc-9zc6s found but phase is Pending instead of Bound.
Sep  8 04:22:11.225: INFO: PersistentVolumeClaim pvc-9zc6s found and phase=Bound (8.813574627s)
Sep  8 04:22:11.225: INFO: Waiting up to 3m0s for PersistentVolume local-cs2zt to have phase Bound
Sep  8 04:22:11.409: INFO: PersistentVolume local-cs2zt found and phase=Bound (183.9618ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kkqw
STEP: Creating a pod to test subpath
Sep  8 04:22:11.891: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kkqw" in namespace "provisioning-9591" to be "Succeeded or Failed"
Sep  8 04:22:12.049: INFO: Pod "pod-subpath-test-preprovisionedpv-kkqw": Phase="Pending", Reason="", readiness=false. Elapsed: 157.731552ms
Sep  8 04:22:14.208: INFO: Pod "pod-subpath-test-preprovisionedpv-kkqw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316487605s
Sep  8 04:22:16.367: INFO: Pod "pod-subpath-test-preprovisionedpv-kkqw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47543651s
Sep  8 04:22:18.526: INFO: Pod "pod-subpath-test-preprovisionedpv-kkqw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.634295245s
STEP: Saw pod success
Sep  8 04:22:18.526: INFO: Pod "pod-subpath-test-preprovisionedpv-kkqw" satisfied condition "Succeeded or Failed"
Sep  8 04:22:18.685: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-kkqw container test-container-subpath-preprovisionedpv-kkqw: <nil>
STEP: delete the pod
Sep  8 04:22:19.021: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kkqw to disappear
Sep  8 04:22:19.178: INFO: Pod pod-subpath-test-preprovisionedpv-kkqw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kkqw
Sep  8 04:22:19.178: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kkqw" in namespace "provisioning-9591"
STEP: Creating pod pod-subpath-test-preprovisionedpv-kkqw
STEP: Creating a pod to test subpath
Sep  8 04:22:19.493: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kkqw" in namespace "provisioning-9591" to be "Succeeded or Failed"
Sep  8 04:22:19.650: INFO: Pod "pod-subpath-test-preprovisionedpv-kkqw": Phase="Pending", Reason="", readiness=false. Elapsed: 157.364626ms
Sep  8 04:22:21.809: INFO: Pod "pod-subpath-test-preprovisionedpv-kkqw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316022638s
STEP: Saw pod success
Sep  8 04:22:21.809: INFO: Pod "pod-subpath-test-preprovisionedpv-kkqw" satisfied condition "Succeeded or Failed"
Sep  8 04:22:21.966: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-kkqw container test-container-subpath-preprovisionedpv-kkqw: <nil>
STEP: delete the pod
Sep  8 04:22:22.291: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kkqw to disappear
Sep  8 04:22:22.448: INFO: Pod pod-subpath-test-preprovisionedpv-kkqw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kkqw
Sep  8 04:22:22.448: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kkqw" in namespace "provisioning-9591"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:399
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:28.002: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 372 lines ...
• [SLOW TEST:9.866 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":10,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:30.362: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 52 lines ...
• [SLOW TEST:20.842 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:22:26.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-4c5e3179-399b-4e60-b37f-50765618e18e
STEP: Creating a pod to test consume configMaps
Sep  8 04:22:27.387: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c21f06e9-51c0-4498-919d-b7944318daa3" in namespace "projected-4199" to be "Succeeded or Failed"
Sep  8 04:22:27.543: INFO: Pod "pod-projected-configmaps-c21f06e9-51c0-4498-919d-b7944318daa3": Phase="Pending", Reason="", readiness=false. Elapsed: 156.236384ms
Sep  8 04:22:29.704: INFO: Pod "pod-projected-configmaps-c21f06e9-51c0-4498-919d-b7944318daa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316886053s
STEP: Saw pod success
Sep  8 04:22:29.704: INFO: Pod "pod-projected-configmaps-c21f06e9-51c0-4498-919d-b7944318daa3" satisfied condition "Succeeded or Failed"
Sep  8 04:22:29.860: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-projected-configmaps-c21f06e9-51c0-4498-919d-b7944318daa3 container agnhost-container: <nil>
STEP: delete the pod
Sep  8 04:22:30.181: INFO: Waiting for pod pod-projected-configmaps-c21f06e9-51c0-4498-919d-b7944318daa3 to disappear
Sep  8 04:22:30.337: INFO: Pod pod-projected-configmaps-c21f06e9-51c0-4498-919d-b7944318daa3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:22:30.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4199" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:30.669: INFO: Only supported for providers [gce gke] (not aws)
... skipping 81 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-2b583d06-195d-42eb-a4ec-e71c9e5dc686
STEP: Creating a pod to test consume secrets
Sep  8 04:22:26.019: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5" in namespace "projected-103" to be "Succeeded or Failed"
Sep  8 04:22:26.180: INFO: Pod "pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5": Phase="Pending", Reason="", readiness=false. Elapsed: 160.994376ms
Sep  8 04:22:28.345: INFO: Pod "pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325987153s
Sep  8 04:22:30.507: INFO: Pod "pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.48782242s
STEP: Saw pod success
Sep  8 04:22:30.507: INFO: Pod "pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5" satisfied condition "Succeeded or Failed"
Sep  8 04:22:30.669: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep  8 04:22:31.006: INFO: Waiting for pod pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5 to disappear
Sep  8 04:22:31.166: INFO: Pod pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.673 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:31.518: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":11,"skipped":82,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:21:29.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
• [SLOW TEST:62.941 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":12,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:32.027: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
STEP: Destroying namespace "apply-7088" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":9,"skipped":66,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:22:32.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9618" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":14,"skipped":72,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 117 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":10,"skipped":113,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:33.079: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
Sep  8 04:22:28.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep  8 04:22:29.699: INFO: Waiting up to 5m0s for pod "pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f" in namespace "emptydir-648" to be "Succeeded or Failed"
Sep  8 04:22:29.855: INFO: Pod "pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 155.837059ms
Sep  8 04:22:32.012: INFO: Pod "pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312876085s
Sep  8 04:22:34.174: INFO: Pod "pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.474633184s
STEP: Saw pod success
Sep  8 04:22:34.174: INFO: Pod "pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f" satisfied condition "Succeeded or Failed"
Sep  8 04:22:34.330: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f container test-container: <nil>
STEP: delete the pod
Sep  8 04:22:34.653: INFO: Waiting for pod pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f to disappear
Sep  8 04:22:34.815: INFO: Pod pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.385 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:35.155: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Sep  8 04:22:26.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep  8 04:22:27.577: INFO: Waiting up to 5m0s for pod "pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf" in namespace "emptydir-6702" to be "Succeeded or Failed"
Sep  8 04:22:27.735: INFO: Pod "pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf": Phase="Pending", Reason="", readiness=false. Elapsed: 157.87989ms
Sep  8 04:22:29.896: INFO: Pod "pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318907385s
Sep  8 04:22:32.054: INFO: Pod "pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47729412s
Sep  8 04:22:34.213: INFO: Pod "pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.635703311s
STEP: Saw pod success
Sep  8 04:22:34.213: INFO: Pod "pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf" satisfied condition "Succeeded or Failed"
Sep  8 04:22:34.371: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf container test-container: <nil>
STEP: delete the pod
Sep  8 04:22:34.695: INFO: Waiting for pod pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf to disappear
Sep  8 04:22:34.852: INFO: Pod pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 21 lines ...
• [SLOW TEST:120.583 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:221
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":9,"skipped":101,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
• [SLOW TEST:38.837 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:130
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":5,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:37.259: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 143 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":12,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:42.462: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Sep  8 04:22:31.167: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep  8 04:22:31.167: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-6w5l
STEP: Creating a pod to test subpath
Sep  8 04:22:31.329: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-6w5l" in namespace "provisioning-5951" to be "Succeeded or Failed"
Sep  8 04:22:31.487: INFO: Pod "pod-subpath-test-inlinevolume-6w5l": Phase="Pending", Reason="", readiness=false. Elapsed: 157.565368ms
Sep  8 04:22:33.648: INFO: Pod "pod-subpath-test-inlinevolume-6w5l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318726862s
Sep  8 04:22:35.808: INFO: Pod "pod-subpath-test-inlinevolume-6w5l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478759882s
Sep  8 04:22:37.966: INFO: Pod "pod-subpath-test-inlinevolume-6w5l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637008598s
Sep  8 04:22:40.124: INFO: Pod "pod-subpath-test-inlinevolume-6w5l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.794722092s
Sep  8 04:22:42.286: INFO: Pod "pod-subpath-test-inlinevolume-6w5l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.956826317s
STEP: Saw pod success
Sep  8 04:22:42.286: INFO: Pod "pod-subpath-test-inlinevolume-6w5l" satisfied condition "Succeeded or Failed"
Sep  8 04:22:42.447: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-6w5l container test-container-volume-inlinevolume-6w5l: <nil>
STEP: delete the pod
Sep  8 04:22:42.786: INFO: Waiting for pod pod-subpath-test-inlinevolume-6w5l to disappear
Sep  8 04:22:42.942: INFO: Pod pod-subpath-test-inlinevolume-6w5l no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-6w5l
Sep  8 04:22:42.942: INFO: Deleting pod "pod-subpath-test-inlinevolume-6w5l" in namespace "provisioning-5951"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":11,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:43.589: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":8,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:21:59.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Sep  8 04:22:00.700: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  8 04:22:01.042: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-9396" in namespace "volume-9396" to be "Succeeded or Failed"
Sep  8 04:22:01.201: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Pending", Reason="", readiness=false. Elapsed: 159.263527ms
Sep  8 04:22:03.362: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320159129s
Sep  8 04:22:05.521: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479734188s
Sep  8 04:22:07.682: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.640001155s
STEP: Saw pod success
Sep  8 04:22:07.682: INFO: Pod "hostpath-symlink-prep-volume-9396" satisfied condition "Succeeded or Failed"
Sep  8 04:22:07.682: INFO: Deleting pod "hostpath-symlink-prep-volume-9396" in namespace "volume-9396"
Sep  8 04:22:07.845: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-9396" to be fully deleted
Sep  8 04:22:08.004: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Sep  8 04:22:10.485: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-9396 exec hostpathsymlink-injector --namespace=volume-9396 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-9396' > /opt/0/index.html'
... skipping 32 lines ...
Sep  8 04:22:31.181: INFO: Pod hostpathsymlink-client still exists
Sep  8 04:22:33.022: INFO: Waiting for pod hostpathsymlink-client to disappear
Sep  8 04:22:33.186: INFO: Pod hostpathsymlink-client still exists
Sep  8 04:22:35.022: INFO: Waiting for pod hostpathsymlink-client to disappear
Sep  8 04:22:35.183: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Sep  8 04:22:35.355: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-9396" in namespace "volume-9396" to be "Succeeded or Failed"
Sep  8 04:22:35.515: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Pending", Reason="", readiness=false. Elapsed: 159.685527ms
Sep  8 04:22:37.674: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319122889s
Sep  8 04:22:39.835: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479535067s
Sep  8 04:22:41.995: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Pending", Reason="", readiness=false. Elapsed: 6.63958724s
Sep  8 04:22:44.161: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80611541s
Sep  8 04:22:46.324: INFO: Pod "hostpath-symlink-prep-volume-9396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.968727019s
STEP: Saw pod success
Sep  8 04:22:46.324: INFO: Pod "hostpath-symlink-prep-volume-9396" satisfied condition "Succeeded or Failed"
Sep  8 04:22:46.324: INFO: Deleting pod "hostpath-symlink-prep-volume-9396" in namespace "volume-9396"
Sep  8 04:22:46.489: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-9396" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:22:46.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9396" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":9,"skipped":58,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:46.998: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":11,"skipped":79,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:22:31.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:15.917 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":79,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:47.036: INFO: Only supported for providers [azure] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:22:47.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-5604" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":12,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:47.735: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:48.064: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
• [SLOW TEST:16.852 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":11,"skipped":114,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:49.955: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 10 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 151 lines ...
Sep  8 04:22:06.196: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-bn4st] to have phase Bound
Sep  8 04:22:06.355: INFO: PersistentVolumeClaim pvc-bn4st found and phase=Bound (159.04547ms)
STEP: Deleting the previously created pod
Sep  8 04:22:15.185: INFO: Deleting pod "pvc-volume-tester-2br4l" in namespace "csi-mock-volumes-6402"
Sep  8 04:22:15.347: INFO: Wait up to 5m0s for pod "pvc-volume-tester-2br4l" to be fully deleted
STEP: Checking CSI driver logs
Sep  8 04:22:27.830: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/67f0c1ce-ec4f-4923-a25c-846eaf0325dd/volumes/kubernetes.io~csi/pvc-48754637-7feb-4cfd-926f-7a250390a138/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-2br4l
Sep  8 04:22:27.830: INFO: Deleting pod "pvc-volume-tester-2br4l" in namespace "csi-mock-volumes-6402"
STEP: Deleting claim pvc-bn4st
Sep  8 04:22:28.308: INFO: Waiting up to 2m0s for PersistentVolume pvc-48754637-7feb-4cfd-926f-7a250390a138 to get deleted
Sep  8 04:22:28.469: INFO: PersistentVolume pvc-48754637-7feb-4cfd-926f-7a250390a138 was removed
STEP: Deleting storageclass csi-mock-volumes-6402-sct5qjk
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":10,"skipped":67,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":113,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:22:35.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
• [SLOW TEST:18.747 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":8,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:53.956: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 132 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":16,"skipped":118,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:57.367: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:22:57.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-8777" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":11,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:22:57.927: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 221 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:01.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5912" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":7,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Sep  8 04:22:34.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
Sep  8 04:22:34.885: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  8 04:22:35.202: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6688" in namespace "provisioning-6688" to be "Succeeded or Failed"
Sep  8 04:22:35.363: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Pending", Reason="", readiness=false. Elapsed: 160.049632ms
Sep  8 04:22:37.521: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318151673s
Sep  8 04:22:39.679: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476027824s
Sep  8 04:22:41.836: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Pending", Reason="", readiness=false. Elapsed: 6.633339293s
Sep  8 04:22:43.994: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.791025211s
STEP: Saw pod success
Sep  8 04:22:43.994: INFO: Pod "hostpath-symlink-prep-provisioning-6688" satisfied condition "Succeeded or Failed"
Sep  8 04:22:43.994: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6688" in namespace "provisioning-6688"
Sep  8 04:22:44.158: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6688" to be fully deleted
Sep  8 04:22:44.315: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zx6q
STEP: Creating a pod to test subpath
Sep  8 04:22:44.472: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zx6q" in namespace "provisioning-6688" to be "Succeeded or Failed"
Sep  8 04:22:44.629: INFO: Pod "pod-subpath-test-inlinevolume-zx6q": Phase="Pending", Reason="", readiness=false. Elapsed: 156.441777ms
Sep  8 04:22:46.785: INFO: Pod "pod-subpath-test-inlinevolume-zx6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313125947s
Sep  8 04:22:48.943: INFO: Pod "pod-subpath-test-inlinevolume-zx6q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470632002s
Sep  8 04:22:51.143: INFO: Pod "pod-subpath-test-inlinevolume-zx6q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.67020687s
Sep  8 04:22:53.303: INFO: Pod "pod-subpath-test-inlinevolume-zx6q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.831070651s
STEP: Saw pod success
Sep  8 04:22:53.304: INFO: Pod "pod-subpath-test-inlinevolume-zx6q" satisfied condition "Succeeded or Failed"
Sep  8 04:22:53.462: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-zx6q container test-container-subpath-inlinevolume-zx6q: <nil>
STEP: delete the pod
Sep  8 04:22:53.880: INFO: Waiting for pod pod-subpath-test-inlinevolume-zx6q to disappear
Sep  8 04:22:54.054: INFO: Pod pod-subpath-test-inlinevolume-zx6q no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zx6q
Sep  8 04:22:54.054: INFO: Deleting pod "pod-subpath-test-inlinevolume-zx6q" in namespace "provisioning-6688"
STEP: Deleting pod
Sep  8 04:22:54.238: INFO: Deleting pod "pod-subpath-test-inlinevolume-zx6q" in namespace "provisioning-6688"
Sep  8 04:22:54.557: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6688" in namespace "provisioning-6688" to be "Succeeded or Failed"
Sep  8 04:22:54.714: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Pending", Reason="", readiness=false. Elapsed: 157.25528ms
Sep  8 04:22:56.872: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315445319s
Sep  8 04:22:59.029: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472273088s
Sep  8 04:23:01.186: INFO: Pod "hostpath-symlink-prep-provisioning-6688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.629052853s
STEP: Saw pod success
Sep  8 04:23:01.186: INFO: Pod "hostpath-symlink-prep-provisioning-6688" satisfied condition "Succeeded or Failed"
Sep  8 04:23:01.186: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6688" in namespace "provisioning-6688"
Sep  8 04:23:01.346: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6688" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:01.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6688" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:384
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:01.835: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 69 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":8,"skipped":41,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:21:45.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
STEP: Deleting pod verify-service-up-exec-pod-8gxrp in namespace services-4066
STEP: verifying service-disabled is not up
Sep  8 04:22:14.999: INFO: Creating new host exec pod
Sep  8 04:22:15.324: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep  8 04:22:17.487: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep  8 04:22:19.488: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Sep  8 04:22:19.488: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4066 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.183.7:80 && echo service-down-failed'
Sep  8 04:22:23.127: INFO: rc: 28
Sep  8 04:22:23.127: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.183.7:80 && echo service-down-failed" in pod services-4066/verify-service-down-host-exec-pod: error running /tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4066 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.183.7:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.64.183.7:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4066
STEP: adding service-proxy-name label
STEP: verifying service is not up
Sep  8 04:22:23.646: INFO: Creating new host exec pod
Sep  8 04:22:23.970: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep  8 04:22:26.133: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Sep  8 04:22:26.133: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4066 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.190.67:80 && echo service-down-failed'
Sep  8 04:22:29.829: INFO: rc: 28
Sep  8 04:22:29.829: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.70.190.67:80 && echo service-down-failed" in pod services-4066/verify-service-down-host-exec-pod: error running /tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4066 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.70.190.67:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.70.190.67:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4066
STEP: removing service-proxy-name annotation
STEP: verifying service is up
Sep  8 04:22:30.322: INFO: Creating new host exec pod
... skipping 16 lines ...
STEP: Deleting pod verify-service-up-exec-pod-rfrzf in namespace services-4066
STEP: verifying service-disabled is still not up
Sep  8 04:22:53.820: INFO: Creating new host exec pod
Sep  8 04:22:54.243: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep  8 04:22:56.405: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep  8 04:22:58.410: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Sep  8 04:22:58.410: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4066 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.183.7:80 && echo service-down-failed'
Sep  8 04:23:02.701: INFO: rc: 28
Sep  8 04:23:02.701: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.64.183.7:80 && echo service-down-failed" in pod services-4066/verify-service-down-host-exec-pod: error running /tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4066 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.64.183.7:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.64.183.7:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4066
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:02.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:77.719 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1865
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":9,"skipped":41,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:03.232: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":13,"skipped":83,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:03.718: INFO: Only supported for providers [gce gke] (not aws)
... skipping 200 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Sep  8 04:23:03.528: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-5805" to be "Succeeded or Failed"
Sep  8 04:23:03.685: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 156.748275ms
Sep  8 04:23:05.842: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314054894s
Sep  8 04:23:08.001: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.473153512s
Sep  8 04:23:10.162: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.633324212s
Sep  8 04:23:10.162: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:10.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5805" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":8,"skipped":78,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:10.934: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
• [SLOW TEST:23.184 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 161 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":12,"skipped":125,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:16.570: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
SSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":14,"skipped":113,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:22:52.884: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Sep  8 04:23:08.421: INFO: PersistentVolumeClaim pvc-45c65 found but phase is Pending instead of Bound.
Sep  8 04:23:10.580: INFO: PersistentVolumeClaim pvc-45c65 found and phase=Bound (8.792083424s)
Sep  8 04:23:10.580: INFO: Waiting up to 3m0s for PersistentVolume local-swgnk to have phase Bound
Sep  8 04:23:10.738: INFO: PersistentVolume local-swgnk found and phase=Bound (157.731307ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-grnn
STEP: Creating a pod to test exec-volume-test
Sep  8 04:23:11.211: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-grnn" in namespace "volume-5730" to be "Succeeded or Failed"
Sep  8 04:23:11.369: INFO: Pod "exec-volume-test-preprovisionedpv-grnn": Phase="Pending", Reason="", readiness=false. Elapsed: 157.853024ms
Sep  8 04:23:13.532: INFO: Pod "exec-volume-test-preprovisionedpv-grnn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32101422s
Sep  8 04:23:15.700: INFO: Pod "exec-volume-test-preprovisionedpv-grnn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.489122056s
STEP: Saw pod success
Sep  8 04:23:15.700: INFO: Pod "exec-volume-test-preprovisionedpv-grnn" satisfied condition "Succeeded or Failed"
Sep  8 04:23:15.858: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-grnn container exec-container-preprovisionedpv-grnn: <nil>
STEP: delete the pod
Sep  8 04:23:16.182: INFO: Waiting for pod exec-volume-test-preprovisionedpv-grnn to disappear
Sep  8 04:23:16.341: INFO: Pod exec-volume-test-preprovisionedpv-grnn no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-grnn
Sep  8 04:23:16.341: INFO: Deleting pod "exec-volume-test-preprovisionedpv-grnn" in namespace "volume-5730"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":15,"skipped":113,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Sep  8 04:23:07.462: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9448 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Sep  8 04:23:09.457: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Sep  8 04:23:09.458: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9448 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Sep  8 04:23:11.347: INFO: rc: 255
Sep  8 04:23:11.347: INFO: got err error running /tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9448 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0908 04:23:11.110272     202 merged_client_builder.go:163] Using in-cluster namespace
I0908 04:23:11.110523     202 merged_client_builder.go:121] Using in-cluster configuration
I0908 04:23:11.114729     202 merged_client_builder.go:121] Using in-cluster configuration
I0908 04:23:11.118813     202 merged_client_builder.go:121] Using in-cluster configuration
I0908 04:23:11.119218     202 round_trippers.go:432] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-9448/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0908 04:23:11.125924     202 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc000334380, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3055420, 0xc000000003, 0x0, 0x0, 0xc0000a2310, 0x25f2cf0, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3055420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc00031b050, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0003cd940, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207dd80, 0xc00009ecf0, 0x1f07e70)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000482b00, 0xc0002e1c80, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Sep  8 04:23:11.347: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9448 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Sep  8 04:23:13.128: INFO: rc: 255
Sep  8 04:23:13.128: INFO: got err error running /tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9448 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0908 04:23:12.938980     215 merged_client_builder.go:163] Using in-cluster namespace
I0908 04:23:12.951320     215 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 11 milliseconds
I0908 04:23:12.951506     215 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0908 04:23:12.959853     215 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 7 milliseconds
I0908 04:23:12.959923     215 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0908 04:23:12.960156     215 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0908 04:23:12.962158     215 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0908 04:23:12.962194     215 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0908 04:23:12.965133     215 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0908 04:23:12.965166     215 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0908 04:23:12.967047     215 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0908 04:23:12.967180     215 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0908 04:23:12.967458     215 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
F0908 04:23:12.967652     215 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0002d4000, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3055420, 0xc000000003, 0x0, 0x0, 0xc0005f6460, 0x25f2cf0, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3055420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc000318940, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000487e0, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207d0e0, 0xc0002e0e40, 0x1f07e70)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000974b00, 0xc0001aa510, 0x1, 0x3)
... skipping 24 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Sep  8 04:23:13.128: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9448 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Sep  8 04:23:15.067: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Sep  8 04:23:15.067: INFO: stdout: "I0908 04:23:14.922068     226 merged_client_builder.go:121] Using in-cluster configuration\nI0908 04:23:14.932675     226 merged_client_builder.go:121] Using in-cluster configuration\nI0908 04:23:14.970216     226 merged_client_builder.go:121] Using in-cluster configuration\nI0908 04:23:14.981275     226 round_trippers.go:454] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 10 milliseconds\nNo resources found in invalid namespace.\n"
Sep  8 04:23:15.067: INFO: stdout: I0908 04:23:14.922068     226 merged_client_builder.go:121] Using in-cluster configuration
... skipping 76 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should handle in-cluster config
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:636
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":6,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:20.238: INFO: Only supported for providers [openstack] (not aws)
... skipping 62 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":90,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Sep  8 04:23:09.686: INFO: PersistentVolumeClaim pvc-tsk9b found but phase is Pending instead of Bound.
Sep  8 04:23:11.843: INFO: PersistentVolumeClaim pvc-tsk9b found and phase=Bound (15.261584741s)
Sep  8 04:23:11.844: INFO: Waiting up to 3m0s for PersistentVolume local-tkv2k to have phase Bound
Sep  8 04:23:12.001: INFO: PersistentVolume local-tkv2k found and phase=Bound (157.16183ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5tm7
STEP: Creating a pod to test subpath
Sep  8 04:23:12.472: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5tm7" in namespace "provisioning-2757" to be "Succeeded or Failed"
Sep  8 04:23:12.628: INFO: Pod "pod-subpath-test-preprovisionedpv-5tm7": Phase="Pending", Reason="", readiness=false. Elapsed: 156.155423ms
Sep  8 04:23:14.804: INFO: Pod "pod-subpath-test-preprovisionedpv-5tm7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.331992227s
Sep  8 04:23:16.966: INFO: Pod "pod-subpath-test-preprovisionedpv-5tm7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.493829491s
Sep  8 04:23:19.127: INFO: Pod "pod-subpath-test-preprovisionedpv-5tm7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655229105s
Sep  8 04:23:21.284: INFO: Pod "pod-subpath-test-preprovisionedpv-5tm7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.81185124s
STEP: Saw pod success
Sep  8 04:23:21.284: INFO: Pod "pod-subpath-test-preprovisionedpv-5tm7" satisfied condition "Succeeded or Failed"
Sep  8 04:23:21.440: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-5tm7 container test-container-subpath-preprovisionedpv-5tm7: <nil>
STEP: delete the pod
Sep  8 04:23:21.790: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5tm7 to disappear
Sep  8 04:23:21.947: INFO: Pod pod-subpath-test-preprovisionedpv-5tm7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5tm7
Sep  8 04:23:21.947: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5tm7" in namespace "provisioning-2757"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:369
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":13,"skipped":70,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:22:28.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
Sep  8 04:22:41.219: INFO: PersistentVolumeClaim pvc-j2bkf found and phase=Bound (163.497131ms)
STEP: Deleting the previously created pod
Sep  8 04:22:55.044: INFO: Deleting pod "pvc-volume-tester-xvpcc" in namespace "csi-mock-volumes-1589"
Sep  8 04:22:55.201: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xvpcc" to be fully deleted
STEP: Checking CSI driver logs
Sep  8 04:23:05.717: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6Imtvd29hempZajRWUjRKYkNVX3hTSnRXa0hQbFYwYmd1blhuZXpMMXg0TjAifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MzEwNzU1NzAsImlhdCI6MTYzMTA3NDk3MCwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLTM1ZWM5MWFiMjAtMWUxYjUudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtMTU4OSIsInBvZCI6eyJuYW1lIjoicHZjLXZvbHVtZS10ZXN0ZXIteHZwY2MiLCJ1aWQiOiI1NDQ0N2E1OC01YTdhLTQ2MzYtYWQ2ZC1hOWJjYWY5ZWE4NmMifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiI3NDQ1MTViZS04MzQ2LTQ5MDktYjM3ZS00Yzc0ZTYwZmY0MjgifX0sIm5iZiI6MTYzMTA3NDk3MCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNzaS1tb2NrLXZvbHVtZXMtMTU4OTpkZWZhdWx0In0.jPKsn7K4odJJr7b6wPK7zqjeYpWRnz0uA1htQ98wQEaE9B_kHRGU8O7BARtKDM-OCUsqDuCcBTfMnXhG9D7wPNg5WN7SGkpGWlEj_wQxEOoujKddPg9KjRQTiM07M8q39_FdpvqRdRTfI7xPmpyDOPN5Ll7cUNKnZSe8WCsBWzoY--N_26uo-VRSLNP1QQv197XI33_CzzLdI_4hWmbp4mZGeDExw-fbvkq4uqJHQnFDxkc-_3RaovUKJPzWinwv3_asOayXnZ4ARw6YZt7JCEwGT7i4uahxIO0CXa2fh--W_gQMrPLXVUIddzoJ6jEiabG_tetgmONmm6Mso0PQBQ","expirationTimestamp":"2021-09-08T04:32:50Z"}}
Sep  8 04:23:05.717: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/54447a58-5a7a-4636-ad6d-a9bcaf9ea86c/volumes/kubernetes.io~csi/pvc-76c6a74d-d915-451e-bd49-5b53042c94c5/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-xvpcc
Sep  8 04:23:05.717: INFO: Deleting pod "pvc-volume-tester-xvpcc" in namespace "csi-mock-volumes-1589"
STEP: Deleting claim pvc-j2bkf
Sep  8 04:23:06.188: INFO: Waiting up to 2m0s for PersistentVolume pvc-76c6a74d-d915-451e-bd49-5b53042c94c5 to get deleted
Sep  8 04:23:06.344: INFO: PersistentVolume pvc-76c6a74d-d915-451e-bd49-5b53042c94c5 found and phase=Released (156.088537ms)
Sep  8 04:23:08.507: INFO: PersistentVolume pvc-76c6a74d-d915-451e-bd49-5b53042c94c5 was removed
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:27.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3436" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":14,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":13,"skipped":132,"failed":1,"failures":["[sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:27.670: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":78,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:23:26.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep  8 04:23:27.589: INFO: Waiting up to 5m0s for pod "pod-8d087956-2013-44dc-951a-a0bdf3ee2a33" in namespace "emptydir-3728" to be "Succeeded or Failed"
Sep  8 04:23:27.745: INFO: Pod "pod-8d087956-2013-44dc-951a-a0bdf3ee2a33": Phase="Pending", Reason="", readiness=false. Elapsed: 155.967005ms
Sep  8 04:23:29.901: INFO: Pod "pod-8d087956-2013-44dc-951a-a0bdf3ee2a33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311932019s
Sep  8 04:23:32.059: INFO: Pod "pod-8d087956-2013-44dc-951a-a0bdf3ee2a33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470197855s
Sep  8 04:23:34.219: INFO: Pod "pod-8d087956-2013-44dc-951a-a0bdf3ee2a33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.630071055s
STEP: Saw pod success
Sep  8 04:23:34.219: INFO: Pod "pod-8d087956-2013-44dc-951a-a0bdf3ee2a33" satisfied condition "Succeeded or Failed"
Sep  8 04:23:34.375: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-8d087956-2013-44dc-951a-a0bdf3ee2a33 container test-container: <nil>
STEP: delete the pod
Sep  8 04:23:34.700: INFO: Waiting for pod pod-8d087956-2013-44dc-951a-a0bdf3ee2a33 to disappear
Sep  8 04:23:34.858: INFO: Pod pod-8d087956-2013-44dc-951a-a0bdf3ee2a33 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":4,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:35.189: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 107 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":10,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:35.749: INFO: Only supported for providers [azure] (not aws)
... skipping 68 lines ...
Sep  8 04:22:46.243: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-gvjhf] to have phase Bound
Sep  8 04:22:46.404: INFO: PersistentVolumeClaim pvc-gvjhf found and phase=Bound (160.8108ms)
STEP: Deleting the previously created pod
Sep  8 04:22:59.220: INFO: Deleting pod "pvc-volume-tester-h6tvg" in namespace "csi-mock-volumes-336"
Sep  8 04:22:59.384: INFO: Wait up to 5m0s for pod "pvc-volume-tester-h6tvg" to be fully deleted
STEP: Checking CSI driver logs
Sep  8 04:23:02.304: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/328d0feb-f8fa-4cf6-b045-62be2495f77c/volumes/kubernetes.io~csi/pvc-e1364e87-5eec-4b0a-95f4-80945a14c261/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-h6tvg
Sep  8 04:23:02.304: INFO: Deleting pod "pvc-volume-tester-h6tvg" in namespace "csi-mock-volumes-336"
STEP: Deleting claim pvc-gvjhf
Sep  8 04:23:02.787: INFO: Waiting up to 2m0s for PersistentVolume pvc-e1364e87-5eec-4b0a-95f4-80945a14c261 to get deleted
Sep  8 04:23:02.949: INFO: PersistentVolume pvc-e1364e87-5eec-4b0a-95f4-80945a14c261 found and phase=Released (161.480234ms)
Sep  8 04:23:05.110: INFO: PersistentVolume pvc-e1364e87-5eec-4b0a-95f4-80945a14c261 found and phase=Released (2.322507182s)
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":15,"skipped":83,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:36.464: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 97 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":14,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:39.181: INFO: Only supported for providers [gce gke] (not aws)
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:39.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-7217" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":16,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:39.640: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Sep  8 04:23:28.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep  8 04:23:29.772: INFO: Waiting up to 5m0s for pod "pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178" in namespace "emptydir-9406" to be "Succeeded or Failed"
Sep  8 04:23:29.928: INFO: Pod "pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178": Phase="Pending", Reason="", readiness=false. Elapsed: 155.809485ms
Sep  8 04:23:32.085: INFO: Pod "pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31272452s
Sep  8 04:23:34.242: INFO: Pod "pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178": Phase="Pending", Reason="", readiness=false. Elapsed: 4.46960439s
Sep  8 04:23:36.398: INFO: Pod "pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178": Phase="Pending", Reason="", readiness=false. Elapsed: 6.625940618s
Sep  8 04:23:38.556: INFO: Pod "pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.783544559s
STEP: Saw pod success
Sep  8 04:23:38.556: INFO: Pod "pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178" satisfied condition "Succeeded or Failed"
Sep  8 04:23:38.712: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178 container test-container: <nil>
STEP: delete the pod
Sep  8 04:23:39.110: INFO: Waiting for pod pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178 to disappear
Sep  8 04:23:39.292: INFO: Pod pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.937 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":81,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:42.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3995" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":11,"skipped":84,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Sep  8 04:23:22.639: INFO: PersistentVolumeClaim pvc-6x7cr found but phase is Pending instead of Bound.
Sep  8 04:23:24.803: INFO: PersistentVolumeClaim pvc-6x7cr found and phase=Bound (6.653334161s)
Sep  8 04:23:24.803: INFO: Waiting up to 3m0s for PersistentVolume local-sx9xs to have phase Bound
Sep  8 04:23:24.958: INFO: PersistentVolume local-sx9xs found and phase=Bound (154.932476ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6gnm
STEP: Creating a pod to test subpath
Sep  8 04:23:25.441: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6gnm" in namespace "provisioning-562" to be "Succeeded or Failed"
Sep  8 04:23:25.615: INFO: Pod "pod-subpath-test-preprovisionedpv-6gnm": Phase="Pending", Reason="", readiness=false. Elapsed: 173.165665ms
Sep  8 04:23:27.775: INFO: Pod "pod-subpath-test-preprovisionedpv-6gnm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334084077s
Sep  8 04:23:29.930: INFO: Pod "pod-subpath-test-preprovisionedpv-6gnm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.48889527s
Sep  8 04:23:32.087: INFO: Pod "pod-subpath-test-preprovisionedpv-6gnm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.64554447s
Sep  8 04:23:34.242: INFO: Pod "pod-subpath-test-preprovisionedpv-6gnm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.800519664s
Sep  8 04:23:36.398: INFO: Pod "pod-subpath-test-preprovisionedpv-6gnm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.956348587s
Sep  8 04:23:38.554: INFO: Pod "pod-subpath-test-preprovisionedpv-6gnm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.11294788s
STEP: Saw pod success
Sep  8 04:23:38.554: INFO: Pod "pod-subpath-test-preprovisionedpv-6gnm" satisfied condition "Succeeded or Failed"
Sep  8 04:23:38.709: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-6gnm container test-container-volume-preprovisionedpv-6gnm: <nil>
STEP: delete the pod
Sep  8 04:23:39.111: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6gnm to disappear
Sep  8 04:23:39.294: INFO: Pod pod-subpath-test-preprovisionedpv-6gnm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6gnm
Sep  8 04:23:39.294: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6gnm" in namespace "provisioning-562"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":14,"skipped":126,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:44.298: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 42 lines ...
Sep  8 04:23:36.703: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8809 explain e2e-test-crd-publish-openapi-1500-crds.spec'
Sep  8 04:23:37.444: INFO: stderr: ""
Sep  8 04:23:37.444: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1500-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep  8 04:23:37.444: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8809 explain e2e-test-crd-publish-openapi-1500-crds.spec.bars'
Sep  8 04:23:38.178: INFO: stderr: ""
Sep  8 04:23:38.178: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1500-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep  8 04:23:38.178: INFO: Running '/tmp/kubectl2875427005/kubectl --server=https://api.e2e-35ec91ab20-1e1b5.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8809 explain e2e-test-crd-publish-openapi-1500-crds.spec.bars2'
Sep  8 04:23:38.916: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:45.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8809" for this suite.
... skipping 2 lines ...
• [SLOW TEST:25.762 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":7,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:46.028: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 192 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":12,"skipped":74,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:48.995: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Sep  8 04:23:44.183: INFO: Waiting up to 5m0s for pod "pod-12333b09-35e7-45f2-a08a-92c61b5901ab" in namespace "emptydir-9059" to be "Succeeded or Failed"
Sep  8 04:23:44.339: INFO: Pod "pod-12333b09-35e7-45f2-a08a-92c61b5901ab": Phase="Pending", Reason="", readiness=false. Elapsed: 155.816724ms
Sep  8 04:23:46.495: INFO: Pod "pod-12333b09-35e7-45f2-a08a-92c61b5901ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312112554s
Sep  8 04:23:48.653: INFO: Pod "pod-12333b09-35e7-45f2-a08a-92c61b5901ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.469524829s
STEP: Saw pod success
Sep  8 04:23:48.653: INFO: Pod "pod-12333b09-35e7-45f2-a08a-92c61b5901ab" satisfied condition "Succeeded or Failed"
Sep  8 04:23:48.809: INFO: Trying to get logs from node ip-172-20-61-194.ap-northeast-2.compute.internal pod pod-12333b09-35e7-45f2-a08a-92c61b5901ab container test-container: <nil>
STEP: delete the pod
Sep  8 04:23:49.129: INFO: Waiting for pod pod-12333b09-35e7-45f2-a08a-92c61b5901ab to disappear
Sep  8 04:23:49.286: INFO: Pod pod-12333b09-35e7-45f2-a08a-92c61b5901ab no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":12,"skipped":85,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 114 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":12,"skipped":107,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:49.639: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
Sep  8 04:23:39.734: INFO: PersistentVolumeClaim pvc-2zvm6 found but phase is Pending instead of Bound.
Sep  8 04:23:41.893: INFO: PersistentVolumeClaim pvc-2zvm6 found and phase=Bound (6.681550423s)
Sep  8 04:23:41.893: INFO: Waiting up to 3m0s for PersistentVolume local-hdgfs to have phase Bound
Sep  8 04:23:42.062: INFO: PersistentVolume local-hdgfs found and phase=Bound (169.195615ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9gx9
STEP: Creating a pod to test subpath
Sep  8 04:23:42.591: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9gx9" in namespace "provisioning-2531" to be "Succeeded or Failed"
Sep  8 04:23:42.750: INFO: Pod "pod-subpath-test-preprovisionedpv-9gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 158.425933ms
Sep  8 04:23:44.908: INFO: Pod "pod-subpath-test-preprovisionedpv-9gx9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316315141s
Sep  8 04:23:47.066: INFO: Pod "pod-subpath-test-preprovisionedpv-9gx9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.475009339s
STEP: Saw pod success
Sep  8 04:23:47.066: INFO: Pod "pod-subpath-test-preprovisionedpv-9gx9" satisfied condition "Succeeded or Failed"
Sep  8 04:23:47.224: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-9gx9 container test-container-volume-preprovisionedpv-9gx9: <nil>
STEP: delete the pod
Sep  8 04:23:47.556: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9gx9 to disappear
Sep  8 04:23:47.714: INFO: Pod pod-subpath-test-preprovisionedpv-9gx9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9gx9
Sep  8 04:23:47.714: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9gx9" in namespace "provisioning-2531"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":16,"skipped":115,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:449
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:52.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7403" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":17,"skipped":116,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:52.444: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  8 04:23:49.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d93c9910-1f63-45ba-9486-b1fabcd4246a" in namespace "projected-237" to be "Succeeded or Failed"
Sep  8 04:23:50.140: INFO: Pod "downwardapi-volume-d93c9910-1f63-45ba-9486-b1fabcd4246a": Phase="Pending", Reason="", readiness=false. Elapsed: 159.075302ms
Sep  8 04:23:52.393: INFO: Pod "downwardapi-volume-d93c9910-1f63-45ba-9486-b1fabcd4246a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.411674875s
STEP: Saw pod success
Sep  8 04:23:52.393: INFO: Pod "downwardapi-volume-d93c9910-1f63-45ba-9486-b1fabcd4246a" satisfied condition "Succeeded or Failed"
Sep  8 04:23:52.631: INFO: Trying to get logs from node ip-172-20-47-217.ap-northeast-2.compute.internal pod downwardapi-volume-d93c9910-1f63-45ba-9486-b1fabcd4246a container client-container: <nil>
STEP: delete the pod
Sep  8 04:23:52.993: INFO: Waiting for pod downwardapi-volume-d93c9910-1f63-45ba-9486-b1fabcd4246a to disappear
Sep  8 04:23:53.153: INFO: Pod downwardapi-volume-d93c9910-1f63-45ba-9486-b1fabcd4246a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:53.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-237" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":83,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:23:12.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 177 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":8,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:54.893: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  8 04:23:55.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6297" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":14,"skipped":98,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  8 04:23:54.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  8 04:23:52.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874" in namespace "downward-api-70" to be "Succeeded or Failed"
Sep  8 04:23:52.630: INFO: Pod "downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874": Phase="Pending", Reason="", readiness=false. Elapsed: 236.788519ms
Sep  8 04:23:54.791: INFO: Pod "downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397890926s
Sep  8 04:23:56.949: INFO: Pod "downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.555257173s
STEP: Saw pod success
Sep  8 04:23:56.949: INFO: Pod "downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874" satisfied condition "Succeeded or Failed"
Sep  8 04:23:57.106: INFO: Trying to get logs from node ip-172-20-53-124.ap-northeast-2.compute.internal pod downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874 container client-container: <nil>
STEP: delete the pod
Sep  8 04:23:57.450: INFO: Waiting for pod downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874 to disappear
Sep  8 04:23:57.612: INFO: Pod downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.677 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:57.942: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 56 lines ...
• [SLOW TEST:23.074 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":11,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:23:58.843: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Sep  8 04:23:36.034: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep  8 04:23:36.034: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jqrj
STEP: Creating a pod to test atomic-volume-subpath
Sep  8 04:23:36.192: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jqrj" in namespace "provisioning-8427" to be "Succeeded or Failed"
Sep  8 04:23:36.348: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Pending", Reason="", readiness=false. Elapsed: 156.07806ms
Sep  8 04:23:38.506: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314117201s
Sep  8 04:23:40.722: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.529786333s
Sep  8 04:23:42.882: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Running", Reason="", readiness=true. Elapsed: 6.690103302s
Sep  8 04:23:45.038: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Running", Reason="", readiness=true. Elapsed: 8.846124734s
Sep  8 04:23:47.195: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Running", Reason="", readiness=true. Elapsed: 11.003502816s
... skipping 2 lines ...
Sep  8 04:23:53.730: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Running", Reason="", readiness=true. Elapsed: 17.538346842s
Sep  8 04:23:55.887: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Running", Reason="", readiness=true. Elapsed: 19.695587033s
Sep  8 04:23:58.046: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Running", Reason="", readiness=true. Elapsed: 21.853974738s
Sep  8 04:24:00.249: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Running", Reason="", readiness=true. Elapsed: 24.057511457s
Sep  8 04:24:02.407: INFO: Pod "pod-subpath-test-inlinevolume-jqrj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.214953762s
STEP: Saw pod success
Sep  8 04:24:02.407: INFO: Pod "pod-subpath-test-inlinevolume-jqrj" satisfied condition "Succeeded or Failed"
Sep  8 04:24:02.563: INFO: Trying to get logs from node ip-172-20-48-118.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-jqrj container test-container-subpath-inlinevolume-jqrj: <nil>
STEP: delete the pod
Sep  8 04:24:02.905: INFO: Waiting for pod pod-subpath-test-inlinevolume-jqrj to disappear
Sep  8 04:24:03.061: INFO: Pod pod-subpath-test-inlinevolume-jqrj no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jqrj
Sep  8 04:24:03.061: INFO: Deleting pod "pod-subpath-test-inlinevolume-jqrj" in namespace "provisioning-8427"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":24,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  8 04:24:03.724: INFO: Only supported for providers [openstack] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 21808 lines ...






4/webserver-deployment-847dcfb7fb-tbvw6\" objectUID=b09555c3-820c-4ed8-bea0-788ced3aa551 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:09.141706       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-b4h6d\" objectUID=48db16f6-55a5-4707-b6c9-abb02a4096dd kind=\"Pod\" virtual=false\nI0908 04:28:09.155949       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-hng7m\" objectUID=ac95fed9-77d6-471e-b376-90938cd030be kind=\"Pod\" virtual=false\nI0908 04:28:09.160079       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-8rf5t\" objectUID=6d7c2086-95db-4d9c-8d12-ef21cad3044a kind=\"Pod\" virtual=false\nI0908 04:28:09.161326       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-lwc5x\" objectUID=f70f528c-6dc4-431f-a840-86fa404f5f27 kind=\"Pod\" virtual=false\nI0908 04:28:09.161750       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-wntmf\" objectUID=95f91bff-4681-4543-9488-6fb72c5a7dbd kind=\"Pod\" virtual=false\nI0908 04:28:09.161791       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-wllnh\" objectUID=5a6699db-ffb2-4b3d-9968-7bede5867056 kind=\"Pod\" virtual=false\nI0908 04:28:09.173799       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-s4xt5\" objectUID=0185b7be-eb8a-47fd-958c-6cfba73cf02d kind=\"Pod\" virtual=false\nI0908 04:28:09.204878       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-pw65s\" objectUID=567cb908-5f57-4ed6-8f33-3edfe71deed0 kind=\"Pod\" virtual=false\nI0908 04:28:09.255165       1 garbagecollector.go:471] \"Processing object\" 
object=\"deployment-9744/webserver-deployment-795d758f88-nz45t\" objectUID=a637953e-21b0-4709-be46-d723c23cda44 kind=\"Pod\" virtual=false\nI0908 04:28:09.302317       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-hpbxj\" objectUID=b95e0599-8967-4107-a9e4-bcf617f275c2 kind=\"Pod\" virtual=false\nI0908 04:28:09.351316       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-mztp9\" objectUID=45c4ca8e-8428-47fc-a853-309351803e05 kind=\"Pod\" virtual=false\nI0908 04:28:09.405474       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-wrph9\" objectUID=899fd36d-1df8-41d2-8a97-ad7ec42abd92 kind=\"Pod\" virtual=false\nI0908 04:28:09.455393       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-795d758f88-qj87w\" objectUID=f0ed9099-4e40-4a08-be15-f3e60b628fb3 kind=\"Pod\" virtual=false\nI0908 04:28:09.501738       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-9qq8q\" objectUID=83965404-9e7f-458f-b245-30bfa43b1e71 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:09.557162       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-bghxh\" objectUID=7ecdccb8-d9d6-4057-b921-786b6c87540a kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:09.582452       1 namespace_controller.go:185] Namespace has been deleted projected-2138\nI0908 04:28:09.605778       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-bcqjv\" objectUID=52c44a73-cbbb-4599-931e-37c8229f99f1 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:09.653938       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-rwjxf\" objectUID=d0d07711-5b9e-4488-8de4-3407825cc3cc 
kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:09.663837       1 pv_controller.go:930] claim \"volume-6151/pvc-cbw48\" bound to volume \"local-mlj7k\"\nI0908 04:28:09.669926       1 pv_controller.go:1341] isVolumeReleased[pvc-19f8d634-4186-4154-a8f9-68de3f7901d9]: volume is released\nI0908 04:28:09.676553       1 pv_controller.go:879] volume \"local-mlj7k\" entered phase \"Bound\"\nI0908 04:28:09.676605       1 pv_controller.go:982] volume \"local-mlj7k\" bound to claim \"volume-6151/pvc-cbw48\"\nI0908 04:28:09.685094       1 pv_controller.go:823] claim \"volume-6151/pvc-cbw48\" entered phase \"Bound\"\nI0908 04:28:09.707773       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-dnkjt\" objectUID=72d68cc5-e95b-41eb-b095-eaafd311c980 kind=\"CiliumEndpoint\" virtual=false\nE0908 04:28:09.746269       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-deployment-847dcfb7fb-tbvw6\", UID:\"b09555c3-820c-4ed8-bea0-788ced3aa551\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-9744\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"apps/v1\", Kind:\"ReplicaSet\", Name:\"webserver-deployment-847dcfb7fb\", 
UID:\"edb7c4ec-331a-47f8-a42f-c38e71999d6a\", Controller:(*bool)(0xc0032c5c90), BlockOwnerDeletion:(*bool)(0xc0032c5c91)}}}: pods \"webserver-deployment-847dcfb7fb-tbvw6\" not found\nI0908 04:28:09.746311       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-jvdgt\" objectUID=6a2a22c5-7ec2-4313-b294-f394bfc33106 kind=\"CiliumEndpoint\" virtual=false\nE0908 04:28:09.795913       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-deployment-847dcfb7fb-8lqmb\", UID:\"14204188-0e3f-496e-9a91-ed1a8e19ff2f\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-9744\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"apps/v1\", Kind:\"ReplicaSet\", Name:\"webserver-deployment-847dcfb7fb\", UID:\"edb7c4ec-331a-47f8-a42f-c38e71999d6a\", Controller:(*bool)(0xc003924310), BlockOwnerDeletion:(*bool)(0xc003924311)}}}: pods \"webserver-deployment-847dcfb7fb-8lqmb\" not found\nI0908 04:28:09.795958       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-vqzsk\" objectUID=278773a1-e567-491b-943b-480cc1e3878f kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:09.845080       1 
garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-tbvw6\" objectUID=b09555c3-820c-4ed8-bea0-788ced3aa551 kind=\"Pod\" virtual=false\nI0908 04:28:09.845106       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9744/webserver-deployment-847dcfb7fb-8lqmb\" objectUID=14204188-0e3f-496e-9a91-ed1a8e19ff2f kind=\"Pod\" virtual=false\nI0908 04:28:09.871400       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-northeast-2a/vol-03f97038306684e76\nI0908 04:28:09.871426       1 pv_controller.go:1436] volume \"pvc-19f8d634-4186-4154-a8f9-68de3f7901d9\" deleted\nI0908 04:28:09.879595       1 pv_controller_base.go:505] deletion of claim \"pvc-protection-1100/pvc-protectiont4jnx\" was already processed\nE0908 04:28:10.089772       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0908 04:28:10.555272       1 garbagecollector.go:471] \"Processing object\" object=\"services-7796/pause-pod-69897df75b\" objectUID=3b95c1f6-b88c-42f0-8114-bf9b5731e74c kind=\"ReplicaSet\" virtual=false\nI0908 04:28:10.555486       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"services-7796/pause-pod\"\nI0908 04:28:10.729758       1 garbagecollector.go:471] \"Processing object\" object=\"services-7796/echo-sourceip\" objectUID=9945b03f-614e-4ca8-b0a9-fd3af84f6d16 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:10.734496       1 endpoints_controller.go:368] \"Error syncing endpoints, retrying\" service=\"services-7796/sourceip-test\" err=\"Operation cannot be fulfilled on endpoints \\\"sourceip-test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0908 04:28:10.734813       1 event.go:291] \"Event occurred\" object=\"services-7796/sourceip-test\" kind=\"Endpoints\" 
apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint services-7796/sourceip-test: Operation cannot be fulfilled on endpoints \\\"sourceip-test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0908 04:28:10.749396       1 request.go:668] Waited for 1.002965274s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1/apis/cilium.io/v2/namespaces/deployment-9744/ciliumendpoints/webserver-deployment-847dcfb7fb-jvdgt\nI0908 04:28:10.848303       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7796/pause-pod-69897df75b\" objectUID=3b95c1f6-b88c-42f0-8114-bf9b5731e74c kind=\"ReplicaSet\" propagationPolicy=Background\nI0908 04:28:10.894316       1 garbagecollector.go:471] \"Processing object\" object=\"services-7796/sourceip-test-djzv5\" objectUID=93023426-f31b-4a7e-a5d0-b0412a954126 kind=\"EndpointSlice\" virtual=false\nE0908 04:28:10.921490       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0908 04:28:10.946685       1 garbagecollector.go:471] \"Processing object\" object=\"services-7796/pause-pod-69897df75b-gjl6p\" objectUID=247b29f4-88a5-4b49-9508-47115a0508aa kind=\"Pod\" virtual=false\nI0908 04:28:10.946877       1 garbagecollector.go:471] \"Processing object\" object=\"services-7796/pause-pod-69897df75b-ts7l5\" objectUID=bf762742-4010-4f0b-bebe-0f71ea2bf3cc kind=\"Pod\" virtual=false\nI0908 04:28:10.996284       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7796/sourceip-test-djzv5\" objectUID=93023426-f31b-4a7e-a5d0-b0412a954126 kind=\"EndpointSlice\" propagationPolicy=Background\nE0908 04:28:11.033158       1 tokens_controller.go:262] error synchronizing serviceaccount apf-4220/default: secrets \"default-token-gbzq6\" is 
forbidden: unable to create new content in namespace apf-4220 because it is being terminated\nI0908 04:28:11.045411       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7796/pause-pod-69897df75b-gjl6p\" objectUID=247b29f4-88a5-4b49-9508-47115a0508aa kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:11.097253       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7796/pause-pod-69897df75b-ts7l5\" objectUID=bf762742-4010-4f0b-bebe-0f71ea2bf3cc kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:11.202105       1 garbagecollector.go:471] \"Processing object\" object=\"services-7796/pause-pod-69897df75b-gjl6p\" objectUID=3fda6795-8918-4dee-a480-48ba4b1bd697 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:11.225023       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [], removed: [resourcequota.example.com/v1, Resource=e2e-test-resourcequota-1852-crds]\nI0908 04:28:11.225161       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0908 04:28:11.225227       1 shared_informer.go:247] Caches are synced for resource quota \nI0908 04:28:11.225238       1 resource_quota_controller.go:454] synced quota controller\nI0908 04:28:11.251832       1 garbagecollector.go:471] \"Processing object\" object=\"services-7796/pause-pod-69897df75b-ts7l5\" objectUID=59ab971e-6d5e-4696-b858-ecd0fd4768f7 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:11.296348       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7796/pause-pod-69897df75b-gjl6p\" objectUID=3fda6795-8918-4dee-a480-48ba4b1bd697 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0908 04:28:11.353947       1 event.go:291] \"Event occurred\" object=\"statefulset-3293/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nE0908 04:28:11.395088       1 garbagecollector.go:350] 
error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"pause-pod-69897df75b-gjl6p\", UID:\"3fda6795-8918-4dee-a480-48ba4b1bd697\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"services-7796\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"pause-pod-69897df75b-gjl6p\", UID:\"247b29f4-88a5-4b49-9508-47115a0508aa\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0xc0037182a0)}}}: ciliumendpoints.cilium.io \"pause-pod-69897df75b-gjl6p\" not found\nI0908 04:28:11.400258       1 garbagecollector.go:471] \"Processing object\" object=\"services-7796/pause-pod-69897df75b-gjl6p\" objectUID=3fda6795-8918-4dee-a480-48ba4b1bd697 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:11.663952       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [resourcequota.example.com/v1, Resource=e2e-test-resourcequota-1852-crds]\nI0908 04:28:11.664034       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0908 04:28:11.664103       1 shared_informer.go:247] Caches are synced for garbage collector \nI0908 04:28:11.664112       1 garbagecollector.go:254] synced garbage collector\nE0908 04:28:11.689934       1 
reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0908 04:28:11.777915       1 tokens_controller.go:262] error synchronizing serviceaccount metrics-grabber-7473/default: secrets \"default-token-4dwrv\" is forbidden: unable to create new content in namespace metrics-grabber-7473 because it is being terminated\nI0908 04:28:12.252095       1 namespace_controller.go:185] Namespace has been deleted prestop-7428\nI0908 04:28:13.507015       1 namespace_controller.go:185] Namespace has been deleted disruption-1171\nI0908 04:28:14.173269       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-036cc0be710ffe40d\") on node \"ip-172-20-47-217.ap-northeast-2.compute.internal\" \nI0908 04:28:14.175588       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-036cc0be710ffe40d\") on node \"ip-172-20-47-217.ap-northeast-2.compute.internal\" \nI0908 04:28:14.322001       1 event.go:291] \"Event occurred\" object=\"deployment-8758/test-rolling-update-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-deployment-585b757574 to 1\"\nI0908 04:28:14.322374       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-8758/test-rolling-update-deployment-585b757574\" need=1 creating=1\nI0908 04:28:14.330673       1 event.go:291] \"Event occurred\" object=\"deployment-8758/test-rolling-update-deployment-585b757574\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: 
test-rolling-update-deployment-585b757574-nc2ww\"\nI0908 04:28:14.332959       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-8758/test-rolling-update-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0908 04:28:14.527101       1 namespace_controller.go:185] Namespace has been deleted deployment-9744\nI0908 04:28:14.566370       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-2855/pvc-m49mw\"\nI0908 04:28:14.573359       1 pv_controller.go:640] volume \"local-mkzld\" is released and reclaim policy \"Retain\" will be executed\nI0908 04:28:14.577037       1 pv_controller.go:879] volume \"local-mkzld\" entered phase \"Released\"\nE0908 04:28:14.654815       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods\nI0908 04:28:14.729726       1 pv_controller_base.go:505] deletion of claim \"provisioning-2855/pvc-m49mw\" was already processed\nE0908 04:28:14.791549       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods\nE0908 04:28:14.918260       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods\nI0908 04:28:15.007599       1 namespace_controller.go:185] Namespace has been deleted configmap-1781\nE0908 04:28:15.118800       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods\nE0908 04:28:15.263113       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: 
Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0908 04:28:15.279132       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods
I0908 04:28:15.488222       1 namespace_controller.go:185] Namespace has been deleted pv-4939
E0908 04:28:15.518536       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods
I0908 04:28:15.617162       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-8282/pvc-x6t6b"
I0908 04:28:15.626773       1 pv_controller.go:640] volume "local-lc59j" is released and reclaim policy "Retain" will be executed
I0908 04:28:15.653953       1 pv_controller.go:879] volume "local-lc59j" entered phase "Released"
E0908 04:28:15.765528       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:28:15.777289       1 pv_controller_base.go:505] deletion of claim "provisioning-8282/pvc-x6t6b" was already processed
E0908 04:28:15.810846       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods
I0908 04:28:15.828463       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volumemode-3137/aws55khg"
I0908 04:28:15.839082       1 pv_controller.go:640] volume "pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0" is released and reclaim policy "Delete" will be executed
I0908 04:28:15.851771       1 pv_controller.go:879] volume "pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0" entered phase "Released"
I0908 04:28:15.855034       1 pv_controller.go:1341] isVolumeReleased[pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0]: volume is released
I0908 04:28:16.027582       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-036cc0be710ffe40d: error deleting EBS volume "vol-036cc0be710ffe40d" since volume is currently attached to "i-099d1b4e99330f8ed"
E0908 04:28:16.027651       1 goroutinemap.go:150] Operation for "delete-pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0[b0fab876-d9f7-4e9c-97f7-e20ca079bc0b]" failed. No retries permitted until 2021-09-08 04:28:16.527629872 +0000 UTC m=+921.605304315 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-036cc0be710ffe40d\" since volume is currently attached to \"i-099d1b4e99330f8ed\""
I0908 04:28:16.027802       1 event.go:291] "Event occurred" object="pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-036cc0be710ffe40d\" since volume is currently attached to \"i-099d1b4e99330f8ed\""
I0908 04:28:16.156212       1 namespace_controller.go:185] Namespace has been deleted apf-4220
E0908 04:28:16.314639       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods
I0908 04:28:16.875975       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-7473
E0908 04:28:17.074196       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods
I0908 04:28:17.262886       1 namespace_controller.go:185] Namespace has been deleted volume-2147
I0908 04:28:17.391814       1 event.go:291] "Event occurred" object="volume-expand-8005/awsb2j84" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E0908 04:28:17.571252       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7226/default: secrets "default-token-kg54q" is forbidden: unable to create new content in namespace provisioning-7226 because it is being terminated
I0908 04:28:17.731185       1 expand_controller.go:289] Ignoring the PVC "volume-expand-6437/csi-hostpathrspdc" (uid: "f17a57dd-ad2f-4351-90df-efe7fb185452") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
I0908 04:28:17.731225       1 event.go:291] "Event occurred" object="volume-expand-6437/csi-hostpathrspdc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ExternalExpanding" message="Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."
E0908 04:28:18.492678       1 namespace_controller.go:162] deletion of namespace container-probe-615 failed: unexpected items still remain in namespace: container-probe-615 for gvr: /v1, Resource=pods
E0908 04:28:19.207994       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-8536/inline-volume-jv7zr-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0908 04:28:19.208482       1 event.go:291] "Event occurred" object="ephemeral-8536/inline-volume-jv7zr-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0908 04:28:19.222227       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-6552
I0908 04:28:19.584806       1 aws.go:2291] Waiting for volume "vol-036cc0be710ffe40d" state: actual=detaching, desired=detached
I0908 04:28:19.699724       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-8536, name: inline-volume-jv7zr, uid: 7a8e8a36-8555-40c7-a568-7520b5ad7c23] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0908 04:28:19.700023       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-jv7zr-my-volume" objectUID=4dbfe6dc-3d17-484f-978b-d0c151422612 kind="PersistentVolumeClaim" virtual=false
I0908 04:28:19.700472       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-jv7zr" objectUID=7a8e8a36-8555-40c7-a568-7520b5ad7c23 kind="Pod" virtual=false
I0908 04:28:19.798481       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8536, name: inline-volume-jv7zr-my-volume, uid: 4dbfe6dc-3d17-484f-978b-d0c151422612] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8536, name: inline-volume-jv7zr, uid: 7a8e8a36-8555-40c7-a568-7520b5ad7c23] is deletingDependents
I0908 04:28:19.817547       1 garbagecollector.go:580] "Deleting object" object="ephemeral-8536/inline-volume-jv7zr-my-volume" objectUID=4dbfe6dc-3d17-484f-978b-d0c151422612 kind="PersistentVolumeClaim" propagationPolicy=Background
E0908 04:28:19.833241       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-8536/inline-volume-jv7zr-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0908 04:28:19.833730       1 event.go:291] "Event occurred" object="ephemeral-8536/inline-volume-jv7zr-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0908 04:28:19.854234       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-jv7zr-my-volume" objectUID=4dbfe6dc-3d17-484f-978b-d0c151422612 kind="PersistentVolumeClaim" virtual=false
I0908 04:28:19.877202       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-8536/inline-volume-jv7zr-my-volume"
I0908 04:28:19.889928       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-jv7zr" objectUID=7a8e8a36-8555-40c7-a568-7520b5ad7c23 kind="Pod" virtual=false
I0908 04:28:19.904564       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-8536, name: inline-volume-jv7zr, uid: 7a8e8a36-8555-40c7-a568-7520b5ad7c23]
I0908 04:28:20.442595       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f5dbdf6403260e2a") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:20.447027       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f5dbdf6403260e2a") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:21.159872       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2460/aws74hv9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E0908 04:28:21.316914       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2855/default: secrets "default-token-zkdll" is forbidden: unable to create new content in namespace provisioning-2855 because it is being terminated
I0908 04:28:21.461940       1 namespace_controller.go:185] Namespace has been deleted services-7796
I0908 04:28:21.676722       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-09-08 04:27:45 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdbu",
  InstanceId: "i-099d1b4e99330f8ed",
  State: "detaching",
  VolumeId: "vol-036cc0be710ffe40d"
}
I0908 04:28:21.677001       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-036cc0be710ffe40d") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:22.370955       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-8758/test-rolling-update-controller" need=0 deleting=1
I0908 04:28:22.371398       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-8758/test-rolling-update-controller" relatedReplicaSets=[test-rolling-update-deployment-585b757574 test-rolling-update-controller]
I0908 04:28:22.371617       1 controller_utils.go:602] "Deleting pod" controller="test-rolling-update-controller" pod="deployment-8758/test-rolling-update-controller-98h8s"
I0908 04:28:22.378842       1 event.go:291] "Event occurred" object="deployment-8758/test-rolling-update-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-controller to 0"
I0908 04:28:22.389918       1 garbagecollector.go:471] "Processing object" object="deployment-8758/test-rolling-update-controller-98h8s" objectUID=4b9892dc-54db-4e30-ba8a-70cbfda3ead1 kind="CiliumEndpoint" virtual=false
I0908 04:28:22.390860       1 event.go:291] "Event occurred" object="deployment-8758/test-rolling-update-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-controller-98h8s"
I0908 04:28:22.402617       1 garbagecollector.go:580] "Deleting object" object="deployment-8758/test-rolling-update-controller-98h8s" objectUID=4b9892dc-54db-4e30-ba8a-70cbfda3ead1 kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:22.694103       1 namespace_controller.go:185] Namespace has been deleted provisioning-7226
I0908 04:28:23.039145       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-6437/csi-hostpathrspdc"
I0908 04:28:23.177849       1 garbagecollector.go:471] "Processing object" object="nettest-4223/netserver-0" objectUID=9e68694b-86cc-467b-8b65-3b77e7ef72cc kind="CiliumEndpoint" virtual=false
I0908 04:28:23.177816       1 pv_controller.go:640] volume "pvc-f17a57dd-ad2f-4351-90df-efe7fb185452" is released and reclaim policy "Delete" will be executed
I0908 04:28:23.197446       1 garbagecollector.go:580] "Deleting object" object="nettest-4223/netserver-0" objectUID=9e68694b-86cc-467b-8b65-3b77e7ef72cc kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:23.205515       1 garbagecollector.go:471] "Processing object" object="nettest-4223/netserver-1" objectUID=7759508f-3ca0-4c1a-9e9a-e6b1a4675550 kind="CiliumEndpoint" virtual=false
I0908 04:28:23.214808       1 pv_controller.go:879] volume "pvc-f17a57dd-ad2f-4351-90df-efe7fb185452" entered phase "Released"
I0908 04:28:23.228203       1 garbagecollector.go:580] "Deleting object" object="nettest-4223/netserver-1" objectUID=7759508f-3ca0-4c1a-9e9a-e6b1a4675550 kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:23.233097       1 pv_controller.go:1341] isVolumeReleased[pvc-f17a57dd-ad2f-4351-90df-efe7fb185452]: volume is released
E0908 04:28:23.240950       1 tokens_controller.go:262] error synchronizing serviceaccount projected-7678/default: secrets "default-token-97zgc" is forbidden: unable to create new content in namespace projected-7678 because it is being terminated
I0908 04:28:23.251077       1 garbagecollector.go:471] "Processing object" object="nettest-4223/netserver-2" objectUID=024dad7f-a8eb-47cb-8831-f46aa9b1675f kind="CiliumEndpoint" virtual=false
I0908 04:28:23.262321       1 pv_controller_base.go:505] deletion of claim "volume-expand-6437/csi-hostpathrspdc" was already processed
I0908 04:28:23.262704       1 garbagecollector.go:580] "Deleting object" object="nettest-4223/netserver-2" objectUID=024dad7f-a8eb-47cb-8831-f46aa9b1675f kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:23.270347       1 garbagecollector.go:471] "Processing object" object="nettest-4223/netserver-3" objectUID=7c7a54a2-b914-4cb9-901b-dc2aecd3a634 kind="CiliumEndpoint" virtual=false
I0908 04:28:23.282199       1 garbagecollector.go:580] "Deleting object" object="nettest-4223/netserver-3" objectUID=7c7a54a2-b914-4cb9-901b-dc2aecd3a634 kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:23.285782       1 garbagecollector.go:471] "Processing object" object="nettest-4223/test-container-pod" objectUID=ebc5ced6-019c-4c44-9055-80cdc7bb3583 kind="CiliumEndpoint" virtual=false
I0908 04:28:23.298454       1 garbagecollector.go:580] "Deleting object" object="nettest-4223/test-container-pod" objectUID=ebc5ced6-019c-4c44-9055-80cdc7bb3583 kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:23.509142       1 garbagecollector.go:471] "Processing object" object="crd-webhook-4792/e2e-test-crd-conversion-webhook-qkhrg" objectUID=0e82a886-eb83-403e-88e0-3155e2813aa8 kind="EndpointSlice" virtual=false
I0908 04:28:23.515315       1 garbagecollector.go:580] "Deleting object" object="crd-webhook-4792/e2e-test-crd-conversion-webhook-qkhrg" objectUID=0e82a886-eb83-403e-88e0-3155e2813aa8 kind="EndpointSlice" propagationPolicy=Background
E0908 04:28:23.598726       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-4223/default: secrets "default-token-qxr9c" is forbidden: unable to create new content in namespace nettest-4223 because it is being terminated
I0908 04:28:23.645565       1 event.go:291] "Event occurred" object="job-6845/backofflimit" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: backofflimit-wqsrq"
I0908 04:28:23.705275       1 deployment_controller.go:583] "Deployment has been deleted" deployment="crd-webhook-4792/sample-crd-conversion-webhook-deployment"
I0908 04:28:23.705331       1 garbagecollector.go:471] "Processing object" object="crd-webhook-4792/sample-crd-conversion-webhook-deployment-697cdbd8f4" objectUID=55a737d7-a7fb-4c7f-8197-4ed5b4206483 kind="ReplicaSet" virtual=false
I0908 04:28:23.707638       1 garbagecollector.go:580] "Deleting object" object="crd-webhook-4792/sample-crd-conversion-webhook-deployment-697cdbd8f4" objectUID=55a737d7-a7fb-4c7f-8197-4ed5b4206483 kind="ReplicaSet" propagationPolicy=Background
I0908 04:28:23.714137       1 garbagecollector.go:471] "Processing object" object="crd-webhook-4792/sample-crd-conversion-webhook-deployment-697cdbd8f4-kjvtq" objectUID=87561ecc-1db0-41a6-b0aa-f87d5fd6e1f1 kind="Pod" virtual=false
I0908 04:28:23.719081       1 garbagecollector.go:580] "Deleting object" object="crd-webhook-4792/sample-crd-conversion-webhook-deployment-697cdbd8f4-kjvtq" objectUID=87561ecc-1db0-41a6-b0aa-f87d5fd6e1f1 kind="Pod" propagationPolicy=Background
I0908 04:28:23.727682       1 garbagecollector.go:471] "Processing object" object="crd-webhook-4792/sample-crd-conversion-webhook-deployment-697cdbd8f4-kjvtq" objectUID=8f7dd505-8839-4477-8725-f64d54b60609 kind="CiliumEndpoint" virtual=false
I0908 04:28:23.732148       1 garbagecollector.go:580] "Deleting object" object="crd-webhook-4792/sample-crd-conversion-webhook-deployment-697cdbd8f4-kjvtq" objectUID=8f7dd505-8839-4477-8725-f64d54b60609 kind="CiliumEndpoint" propagationPolicy=Background
E0908 04:28:24.430769       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:28:24.605992       1 stateful_set_control.go:489] StatefulSet statefulset-3293/ss2 terminating Pod ss2-2 for scale down
I0908 04:28:24.610098       1 event.go:291] "Event occurred" object="statefulset-3293/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-2 in StatefulSet ss2 successful"
I0908 04:28:24.665724       1 event.go:291] "Event occurred" object="volume-expand-8005/awsb2j84" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0908 04:28:24.668470       1 pv_controller.go:1341] isVolumeReleased[pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0]: volume is released
I0908 04:28:24.863217       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-northeast-2a/vol-036cc0be710ffe40d
I0908 04:28:24.863247       1 pv_controller.go:1436] volume "pvc-1411ed6f-16ca-49df-bf9a-c32d5c8af5a0" deleted
I0908 04:28:24.872537       1 pv_controller_base.go:505] deletion of claim "volumemode-3137/aws55khg" was already processed
I0908 04:28:25.215872       1 garbagecollector.go:471] "Processing object" object="services-5052/verify-service-up-exec-pod-44658" objectUID=11c95ce4-4ac5-4188-a491-b9ad1e074cda kind="CiliumEndpoint" virtual=false
I0908 04:28:25.257072       1 garbagecollector.go:580] "Deleting object" object="services-5052/verify-service-up-exec-pod-44658" objectUID=11c95ce4-4ac5-4188-a491-b9ad1e074cda kind="CiliumEndpoint" propagationPolicy=Background
E0908 04:28:25.401967       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-3863/default: secrets "default-token-p9tbg" is forbidden: unable to create new content in namespace secrets-3863 because it is being terminated
I0908 04:28:25.917337       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f5dbdf6403260e2a") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:26.011361       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f5dbdf6403260e2a") from node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:26.057920       1 aws.go:2014] Assigned mount device cj -> volume vol-0f5dbdf6403260e2a
I0908 04:28:26.164633       1 namespace_controller.go:185] Namespace has been deleted container-probe-615
I0908 04:28:26.392295       1 aws.go:2427] AttachVolume volume="vol-0f5dbdf6403260e2a" instance="i-099d1b4e99330f8ed" request returned {
  AttachTime: 2021-09-08 04:28:26.361 +0000 UTC,
  Device: "/dev/xvdcj",
  InstanceId: "i-099d1b4e99330f8ed",
  State: "attaching",
  VolumeId: "vol-0f5dbdf6403260e2a"
}
I0908 04:28:26.492361       1 namespace_controller.go:185] Namespace has been deleted provisioning-2855
I0908 04:28:26.816961       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-09bee0ad558aad388
I0908 04:28:26.869202       1 pv_controller.go:1677] volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" provisioned for claim "fsgroupchangepolicy-2460/aws74hv9"
I0908 04:28:26.869377       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2460/aws74hv9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792 using kubernetes.io/aws-ebs"
I0908 04:28:26.873485       1 pv_controller.go:879] volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" entered phase "Bound"
I0908 04:28:26.873514       1 pv_controller.go:982] volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" bound to claim "fsgroupchangepolicy-2460/aws74hv9"
I0908 04:28:26.879649       1 pv_controller.go:823] claim "fsgroupchangepolicy-2460/aws74hv9" entered phase "Bound"
I0908 04:28:27.147119       1 event.go:291] "Event occurred" object="job-6845/backofflimit" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: backofflimit-ps6rs"
E0908 04:28:27.156054       1 job_controller.go:404] Error syncing job: failed pod(s) detected for job key "job-6845/backofflimit"
E0908 04:28:27.364874       1 pv_controller.go:1452] error finding provisioning plugin for claim volumemode-5481/pvc-s4zns: storageclass.storage.k8s.io "volumemode-5481" not found
I0908 04:28:27.365140       1 event.go:291] "Event occurred" object="volumemode-5481/pvc-s4zns" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-5481\" not found"
I0908 04:28:27.520034       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388") from node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:27.526313       1 pv_controller.go:879] volume "local-ncdwp" entered phase "Available"
I0908 04:28:27.569584       1 aws.go:2014] Assigned mount device be -> volume vol-09bee0ad558aad388
I0908 04:28:27.739461       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-f17a57dd-ad2f-4351-90df-efe7fb185452" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-6437^23132112-105d-11ec-ae56-2a7a411a1e22") on node "ip-172-20-61-194.ap-northeast-2.compute.internal" 
I0908 04:28:27.744938       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-f17a57dd-ad2f-4351-90df-efe7fb185452" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-6437^23132112-105d-11ec-ae56-2a7a411a1e22") on node "ip-172-20-61-194.ap-northeast-2.compute.internal" 
I0908 04:28:27.757730       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-f17a57dd-ad2f-4351-90df-efe7fb185452" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-6437^23132112-105d-11ec-ae56-2a7a411a1e22") on node "ip-172-20-61-194.ap-northeast-2.compute.internal" 
I0908 04:28:27.821482       1 event.go:291] "Event occurred" object="ephemeral-8536-3666/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0908 04:28:27.948843       1 aws.go:2427] AttachVolume volume="vol-09bee0ad558aad388" instance="i-099d1b4e99330f8ed" request returned {
  AttachTime: 2021-09-08 04:28:27.917 +0000 UTC,
  Device: "/dev/xvdbe",
  InstanceId: "i-099d1b4e99330f8ed",
  State: "attaching",
  VolumeId: "vol-09bee0ad558aad388"
}
I0908 04:28:28.186527       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-1705/pvc-zbjm9"
I0908 04:28:28.191725       1 pv_controller.go:640] volume "pvc-1a26f2d3-ea57-410e-a326-287f773567cf" is released and reclaim policy "Delete" will be executed
I0908 04:28:28.198201       1 pv_controller.go:879] volume "pvc-1a26f2d3-ea57-410e-a326-287f773567cf" entered phase "Released"
I0908 04:28:28.200785       1 pv_controller.go:1341] isVolumeReleased[pvc-1a26f2d3-ea57-410e-a326-287f773567cf]: volume is released
I0908 04:28:28.268590       1 namespace_controller.go:185] Namespace has been deleted provisioning-8282
I0908 04:28:28.322407       1 event.go:291] "Event occurred" object="ephemeral-8536-3666/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0908 04:28:28.485366       1 event.go:291] "Event occurred" object="ephemeral-8536-3666/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0908 04:28:28.511164       1 aws.go:2037] Releasing in-process attachment entry: cj -> volume vol-0f5dbdf6403260e2a
I0908 04:28:28.511312       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f5dbdf6403260e2a") from node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:28.511434       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-6066/pod-c8cc5b8f-0482-4004-b15e-16896bbb9299" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-19c36838-74df-4e5b-bdf7-a537f826bec3\" "
I0908 04:28:28.578869       1 namespace_controller.go:185] Namespace has been deleted projected-7678
I0908 04:28:28.653719       1 event.go:291] "Event occurred" object="ephemeral-8536-3666/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0908 04:28:28.694306       1 namespace_controller.go:185] Namespace has been deleted nettest-4223
I0908 04:28:28.817168       1 event.go:291] "Event occurred" object="ephemeral-8536-3666/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
E0908 04:28:28.948808       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0908 04:28:29.023041       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-9314/pvc-zf8k8: storageclass.storage.k8s.io "provisioning-9314" not found
I0908 04:28:29.023297       1 event.go:291] "Event occurred" object="provisioning-9314/pvc-zf8k8" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-9314\" not found"
I0908 04:28:29.173777       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-8758/test-rolling-update-deployment-585b757574" need=1 creating=1
I0908 04:28:29.184789       1 garbagecollector.go:471] "Processing object" object="deployment-8758/test-rolling-update-deployment-585b757574-nc2ww" objectUID=58bdf3fc-bd19-4ff2-8598-c2958b6ce1cf kind="CiliumEndpoint" virtual=false
I0908 04:28:29.194242       1 pv_controller.go:879] volume "local-cdb5q" entered phase "Available"
I0908 04:28:29.210805       1 garbagecollector.go:580] "Deleting object" object="deployment-8758/test-rolling-update-deployment-585b757574-nc2ww" objectUID=58bdf3fc-bd19-4ff2-8598-c2958b6ce1cf kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:29.294836       1 garbagecollector.go:471] "Processing object" object="deployment-8758/test-rolling-update-controller" objectUID=c627d0c9-546f-4433-a216-80fcdeac69ff kind="ReplicaSet" virtual=false
I0908 04:28:29.294997       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-8758/test-rolling-update-deployment"
I0908 04:28:29.295098       1 garbagecollector.go:471] "Processing object" object="deployment-8758/test-rolling-update-deployment-585b757574" objectUID=5b4598b4-9a3c-46a5-8a04-3375f10e85ba kind="ReplicaSet" virtual=false
I0908 04:28:29.302953       1 garbagecollector.go:580] "Deleting object" object="deployment-8758/test-rolling-update-deployment-585b757574" objectUID=5b4598b4-9a3c-46a5-8a04-3375f10e85ba kind="ReplicaSet" propagationPolicy=Background
I0908 04:28:29.303374       1 garbagecollector.go:580] "Deleting object" object="deployment-8758/test-rolling-update-controller" objectUID=c627d0c9-546f-4433-a216-80fcdeac69ff kind="ReplicaSet" propagationPolicy=Background
I0908 04:28:29.334499       1 event.go:291] "Event occurred" object="ephemeral-8536/inline-volume-tester-rffzv-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-8536\" or manually created by system administrator"
I0908 04:28:30.027006       1 replica_set.go:559] "Too few replicas" replicaSet="webhook-4306/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0908 04:28:30.027889       1 event.go:291] "Event occurred" object="webhook-4306/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0908 04:28:30.036698       1 event.go:291] "Event occurred" object="webhook-4306/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-x2mgv"
I0908 04:28:30.041290       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-4306/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0908 04:28:30.063235       1 aws.go:2037] Releasing in-process attachment entry: be -> volume vol-09bee0ad558aad388
I0908 04:28:30.064339       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388") from node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:30.064538       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2460/pod-740d277c-e5cd-4410-963d-d2a867d44a8d" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792\" "
I0908 04:28:30.609253       1 replica_set.go:559] "Too few replicas" replicaSet="replicaset-7449/test-rs" need=1 creating=1
I0908 04:28:30.615071       1 event.go:291] "Event occurred" object="replicaset-7449/test-rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rs-46hqh"
I0908 04:28:30.832715       1 namespace_controller.go:185] Namespace has been deleted secrets-3863
E0908 04:28:30.983601       1 pv_controller.go:1452] error finding provisioning plugin for claim volume-1370/pvc-bvcbk: storageclass.storage.k8s.io "volume-1370" not found
I0908 04:28:30.983804       1 event.go:291] "Event occurred" object="volume-1370/pvc-bvcbk" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-1370\" not found"
I0908 04:28:31.008303       1 namespace_controller.go:185] Namespace has been deleted projected-3148
E0908 04:28:31.039813       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:28:31.047110       1 namespace_controller.go:185] Namespace has been deleted secret-namespace-1150
I0908 04:28:31.150238       1 pv_controller.go:879] volume "local-w5hm2" entered phase "Available"
E0908 04:28:31.537766       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:28:32.123730       1 pv_controller.go:879] volume "pvc-06fb8533-4ea5-4a24-af85-8c80aa959120" entered phase "Bound"
I0908 04:28:32.123763       1 pv_controller.go:982] volume "pvc-06fb8533-4ea5-4a24-af85-8c80aa959120" bound to claim "ephemeral-8536/inline-volume-tester-rffzv-my-volume-0"
I0908 04:28:32.131988       1 pv_controller.go:823] claim "ephemeral-8536/inline-volume-tester-rffzv-my-volume-0" entered phase "Bound"
I0908 04:28:32.513401       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-06fb8533-4ea5-4a24-af85-8c80aa959120" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-8536^394be10e-105d-11ec-a7c9-e684629e974b") from node "ip-172-20-48-118.ap-northeast-2.compute.internal" 
I0908 04:28:32.631162       1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-7833
I0908 04:28:33.057801       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-06fb8533-4ea5-4a24-af85-8c80aa959120" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-8536^394be10e-105d-11ec-a7c9-e684629e974b") from node "ip-172-20-48-118.ap-northeast-2.compute.internal" 
I0908 04:28:33.058077       1 event.go:291] "Event occurred" object="ephemeral-8536/inline-volume-tester-rffzv" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-06fb8533-4ea5-4a24-af85-8c80aa959120\" "
I0908 04:28:33.464947       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-4792
I0908 04:28:34.254387       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-1a26f2d3-ea57-410e-a326-287f773567cf" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-1705^4") on node "ip-172-20-53-124.ap-northeast-2.compute.internal" 
I0908 04:28:34.256737       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-1a26f2d3-ea57-410e-a326-287f773567cf" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-1705^4") on node "ip-172-20-53-124.ap-northeast-2.compute.internal" 
I0908 04:28:34.406757       1 stateful_set_control.go:489] StatefulSet statefulset-3293/ss2 terminating Pod ss2-1 for scale down
I0908 04:28:34.411723       1 event.go:291] "Event occurred" object="statefulset-3293/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
I0908 04:28:34.627406       1 stateful_set_control.go:523] StatefulSet statefulset-4006/ss terminating Pod ss-2 for update
I0908 04:28:34.630479       1 event.go:291] "Event occurred" object="statefulset-4006/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-2 in StatefulSet ss successful"
I0908 04:28:34.672872       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-8077
I0908 04:28:34.682113       1 namespace_controller.go:185] Namespace has been deleted e2e-privileged-pod-9269
I0908 04:28:34.691284       1 namespace_controller.go:185] Namespace has been deleted deployment-8758
I0908 04:28:34.806108       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-1a26f2d3-ea57-410e-a326-287f773567cf" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-1705^4") on node "ip-172-20-53-124.ap-northeast-2.compute.internal" 
I0908 04:28:35.213369       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-1705/pvc-zbjm9" was already processed
W0908 04:28:35.907985       1 utils.go:265] Service services-5052/service-headless-toggled using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
I0908 04:28:36.691508       1 namespace_controller.go:185] Namespace has been deleted downward-api-2598
E0908 04:28:36.998518       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:28:37.004021       1 namespace_controller.go:185] Namespace has been deleted volumemode-3137
I0908 04:28:37.157067       1 event.go:291] "Event occurred" object="job-6845/backofflimit" kind="Job" apiVersion="batch/v1" type="Warning" reason="BackoffLimitExceeded" message="Job has reached the specified backoff limit"
I0908 04:28:37.677717       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-3198be70-e10f-4c78-8a24-2ac470e655a0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f77bfcaf03ad8dcb") on node "ip-172-20-61-194.ap-northeast-2.compute.internal" 
I0908 04:28:37.680316       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-3198be70-e10f-4c78-8a24-2ac470e655a0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f77bfcaf03ad8dcb") on node "ip-172-20-61-194.ap-northeast-2.compute.internal" 
I0908 04:28:38.156307       1 namespace_controller.go:185] Namespace has been deleted pods-9392
I0908 04:28:38.553697       1 event.go:291] "Event occurred" object="volume-1601/awsfhlpx" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0908 04:28:38.852093       1 namespace_controller.go:185] Namespace has been deleted volume-expand-6437
I0908 04:28:39.022344       1 garbagecollector.go:471] "Processing object" object="volume-expand-6437-8303/csi-hostpath-attacher-6455447485" objectUID=7959a283-679d-4efd-a6a9-3dd4a871e9d0 kind="ControllerRevision" virtual=false
I0908 04:28:39.022355       1 
stateful_set.go:419] StatefulSet has been deleted volume-expand-6437-8303/csi-hostpath-attacher\nI0908 04:28:39.022401       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpath-attacher-0\" objectUID=f8b8406b-0c80-49d7-a616-1bd934c3da86 kind=\"Pod\" virtual=false\nI0908 04:28:39.024619       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpath-attacher-6455447485\" objectUID=7959a283-679d-4efd-a6a9-3dd4a871e9d0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:39.025399       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpath-attacher-0\" objectUID=f8b8406b-0c80-49d7-a616-1bd934c3da86 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:39.341790       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpathplugin-ldz5r\" objectUID=ce5eb44f-f3fe-418d-869d-9affe8c229d4 kind=\"EndpointSlice\" virtual=false\nI0908 04:28:39.356275       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpathplugin-ldz5r\" objectUID=ce5eb44f-f3fe-418d-869d-9affe8c229d4 kind=\"EndpointSlice\" propagationPolicy=Background\nI0908 04:28:39.396839       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replicaset-7449/test-rs\" need=2 creating=1\nI0908 04:28:39.400905       1 event.go:291] \"Event occurred\" object=\"replicaset-7449/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-pnrj9\"\nI0908 04:28:39.535050       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpathplugin-786ddff999\" objectUID=6fffce20-5b70-47fa-aff5-cafd5f45132c kind=\"ControllerRevision\" virtual=false\nI0908 04:28:39.535388       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-6437-8303/csi-hostpathplugin\nI0908 04:28:39.535525       1 
garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpathplugin-0\" objectUID=3123d7be-0fc7-4500-b86d-162dc11eea88 kind=\"Pod\" virtual=false\nI0908 04:28:39.537400       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpathplugin-786ddff999\" objectUID=6fffce20-5b70-47fa-aff5-cafd5f45132c kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:39.538774       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpathplugin-0\" objectUID=3123d7be-0fc7-4500-b86d-162dc11eea88 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:39.553274       1 stateful_set_control.go:489] StatefulSet statefulset-3293/ss2 terminating Pod ss2-0 for scale down\nI0908 04:28:39.566307       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replicaset-7449/test-rs\" need=3 creating=1\nI0908 04:28:39.571386       1 event.go:291] \"Event occurred\" object=\"replicaset-7449/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-6knsf\"\nI0908 04:28:39.571410       1 event.go:291] \"Event occurred\" object=\"statefulset-3293/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nI0908 04:28:39.613913       1 namespace_controller.go:185] Namespace has been deleted provisioning-996\nI0908 04:28:39.665461       1 pv_controller.go:930] claim \"volume-1370/pvc-bvcbk\" bound to volume \"local-w5hm2\"\nI0908 04:28:39.672067       1 pv_controller.go:879] volume \"local-w5hm2\" entered phase \"Bound\"\nI0908 04:28:39.672256       1 pv_controller.go:982] volume \"local-w5hm2\" bound to claim \"volume-1370/pvc-bvcbk\"\nI0908 04:28:39.677567       1 pv_controller.go:823] claim \"volume-1370/pvc-bvcbk\" entered phase \"Bound\"\nI0908 04:28:39.677956       1 pv_controller.go:930] claim 
\"provisioning-9314/pvc-zf8k8\" bound to volume \"local-cdb5q\"\nI0908 04:28:39.685853       1 pv_controller.go:879] volume \"local-cdb5q\" entered phase \"Bound\"\nI0908 04:28:39.685878       1 pv_controller.go:982] volume \"local-cdb5q\" bound to claim \"provisioning-9314/pvc-zf8k8\"\nI0908 04:28:39.691312       1 pv_controller.go:823] claim \"provisioning-9314/pvc-zf8k8\" entered phase \"Bound\"\nI0908 04:28:39.691611       1 pv_controller.go:930] claim \"volumemode-5481/pvc-s4zns\" bound to volume \"local-ncdwp\"\nI0908 04:28:39.698337       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpath-provisioner-54845bd5f6\" objectUID=0af197a1-8187-48c3-8cc3-3d5f201c1018 kind=\"ControllerRevision\" virtual=false\nI0908 04:28:39.698853       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-6437-8303/csi-hostpath-provisioner\nI0908 04:28:39.699064       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpath-provisioner-0\" objectUID=d024ba68-ff5c-41db-ab1c-8e6f79701139 kind=\"Pod\" virtual=false\nI0908 04:28:39.702909       1 pv_controller.go:879] volume \"local-ncdwp\" entered phase \"Bound\"\nI0908 04:28:39.703313       1 pv_controller.go:982] volume \"local-ncdwp\" bound to claim \"volumemode-5481/pvc-s4zns\"\nI0908 04:28:39.703283       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpath-provisioner-54845bd5f6\" objectUID=0af197a1-8187-48c3-8cc3-3d5f201c1018 kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:39.709373       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpath-provisioner-0\" objectUID=d024ba68-ff5c-41db-ab1c-8e6f79701139 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:39.712934       1 pv_controller.go:823] claim \"volumemode-5481/pvc-s4zns\" entered phase \"Bound\"\nI0908 04:28:39.713697       1 event.go:291] \"Event occurred\" 
object=\"volume-expand-8005/awsb2j84\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0908 04:28:39.937251       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-6437-8303/csi-hostpath-resizer\nI0908 04:28:39.937274       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpath-resizer-6d68d899cf\" objectUID=bf11bd26-7dec-4863-9dc8-0e712a9323f6 kind=\"ControllerRevision\" virtual=false\nI0908 04:28:39.937334       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpath-resizer-0\" objectUID=334ec52d-2870-45f0-860e-66888a14a6a2 kind=\"Pod\" virtual=false\nI0908 04:28:39.940069       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpath-resizer-6d68d899cf\" objectUID=bf11bd26-7dec-4863-9dc8-0e712a9323f6 kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:39.940393       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpath-resizer-0\" objectUID=334ec52d-2870-45f0-860e-66888a14a6a2 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:40.096579       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-6437-8303/csi-hostpath-snapshotter\nI0908 04:28:40.096642       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpath-snapshotter-fd7b9dc7c\" objectUID=f8f8ecd8-5bcc-457a-915a-f74d39b35f87 kind=\"ControllerRevision\" virtual=false\nI0908 04:28:40.096738       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-6437-8303/csi-hostpath-snapshotter-0\" objectUID=aaa517e6-7ecf-433b-8835-30e759f711c9 kind=\"Pod\" virtual=false\nI0908 04:28:40.098749       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpath-snapshotter-fd7b9dc7c\" 
objectUID=f8f8ecd8-5bcc-457a-915a-f74d39b35f87 kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:40.099758       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-6437-8303/csi-hostpath-snapshotter-0\" objectUID=aaa517e6-7ecf-433b-8835-30e759f711c9 kind=\"Pod\" propagationPolicy=Background\nE0908 04:28:40.229245       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0908 04:28:40.471944       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0908 04:28:40.799597       1 namespace_controller.go:185] Namespace has been deleted volume-7650\nE0908 04:28:41.317177       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0908 04:28:43.100383       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"pvc-3198be70-e10f-4c78-8a24-2ac470e655a0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f77bfcaf03ad8dcb\") on node \"ip-172-20-61-194.ap-northeast-2.compute.internal\" \nE0908 04:28:43.724061       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0908 04:28:44.200841       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-09a62c4bbae2b0b28\nI0908 04:28:44.241507       1 pv_controller.go:1677] volume \"pvc-b8a9f895-efef-4642-a03a-723cfba6c648\" provisioned for claim \"volume-1601/awsfhlpx\"\nI0908 04:28:44.241638       
1 event.go:291] \"Event occurred\" object=\"volume-1601/awsfhlpx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-b8a9f895-efef-4642-a03a-723cfba6c648 using kubernetes.io/aws-ebs\"\nI0908 04:28:44.245947       1 pv_controller.go:879] volume \"pvc-b8a9f895-efef-4642-a03a-723cfba6c648\" entered phase \"Bound\"\nI0908 04:28:44.246003       1 pv_controller.go:982] volume \"pvc-b8a9f895-efef-4642-a03a-723cfba6c648\" bound to claim \"volume-1601/awsfhlpx\"\nI0908 04:28:44.251380       1 pv_controller.go:823] claim \"volume-1601/awsfhlpx\" entered phase \"Bound\"\nE0908 04:28:44.635497       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0908 04:28:44.957566       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-b8a9f895-efef-4642-a03a-723cfba6c648\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09a62c4bbae2b0b28\") from node \"ip-172-20-47-217.ap-northeast-2.compute.internal\" \nI0908 04:28:45.039684       1 aws.go:2014] Assigned mount device bx -> volume vol-09a62c4bbae2b0b28\nI0908 04:28:45.223388       1 event.go:291] \"Event occurred\" object=\"statefulset-4006/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nI0908 04:28:45.259945       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-3198be70-e10f-4c78-8a24-2ac470e655a0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f77bfcaf03ad8dcb\") from node \"ip-172-20-61-194.ap-northeast-2.compute.internal\" \nI0908 04:28:45.312141       1 aws.go:2014] Assigned mount device cg -> volume vol-0f77bfcaf03ad8dcb\nI0908 04:28:45.474240       1 aws.go:2427] AttachVolume 
volume=\"vol-09a62c4bbae2b0b28\" instance=\"i-099d1b4e99330f8ed\" request returned {\n  AttachTime: 2021-09-08 04:28:45.44 +0000 UTC,\n  Device: \"/dev/xvdbx\",\n  InstanceId: \"i-099d1b4e99330f8ed\",\n  State: \"attaching\",\n  VolumeId: \"vol-09a62c4bbae2b0b28\"\n}\nE0908 04:28:45.676641       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-6437-8303/default: secrets \"default-token-7rdz9\" is forbidden: unable to create new content in namespace volume-expand-6437-8303 because it is being terminated\nI0908 04:28:45.758598       1 aws.go:2427] AttachVolume volume=\"vol-0f77bfcaf03ad8dcb\" instance=\"i-0fe90adb1a5729ec3\" request returned {\n  AttachTime: 2021-09-08 04:28:45.724 +0000 UTC,\n  Device: \"/dev/xvdcg\",\n  InstanceId: \"i-0fe90adb1a5729ec3\",\n  State: \"attaching\",\n  VolumeId: \"vol-0f77bfcaf03ad8dcb\"\n}\nI0908 04:28:46.327250       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-4306/e2e-test-webhook-wgcg8\" objectUID=eaab12e8-cc95-4755-8b35-b0e587614c6e kind=\"EndpointSlice\" virtual=false\nI0908 04:28:46.333774       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-4306/e2e-test-webhook-wgcg8\" objectUID=eaab12e8-cc95-4755-8b35-b0e587614c6e kind=\"EndpointSlice\" propagationPolicy=Background\nI0908 04:28:46.444102       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-9314/pvc-zf8k8\"\nI0908 04:28:46.452307       1 pv_controller.go:640] volume \"local-cdb5q\" is released and reclaim policy \"Retain\" will be executed\nI0908 04:28:46.455968       1 pv_controller.go:879] volume \"local-cdb5q\" entered phase \"Released\"\nI0908 04:28:46.509009       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-4306/sample-webhook-deployment\"\nI0908 04:28:46.509168       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-4306/sample-webhook-deployment-78988fc6cd\" objectUID=3230c5a1-cc79-42bd-b61b-3c6c7e56b460 
kind=\"ReplicaSet\" virtual=false\nI0908 04:28:46.511139       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-4306/sample-webhook-deployment-78988fc6cd\" objectUID=3230c5a1-cc79-42bd-b61b-3c6c7e56b460 kind=\"ReplicaSet\" propagationPolicy=Background\nI0908 04:28:46.514655       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-4306/sample-webhook-deployment-78988fc6cd-x2mgv\" objectUID=412e412d-c0f6-481d-8979-538d5c3f04ba kind=\"Pod\" virtual=false\nI0908 04:28:46.517291       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-4306/sample-webhook-deployment-78988fc6cd-x2mgv\" objectUID=412e412d-c0f6-481d-8979-538d5c3f04ba kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:46.530829       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-4306/sample-webhook-deployment-78988fc6cd-x2mgv\" objectUID=69b537f3-bf26-4960-bc69-7a6e7ef54937 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:46.538308       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-4306/sample-webhook-deployment-78988fc6cd-x2mgv\" objectUID=69b537f3-bf26-4960-bc69-7a6e7ef54937 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0908 04:28:46.623073       1 pv_controller_base.go:505] deletion of claim \"provisioning-9314/pvc-zf8k8\" was already processed\nE0908 04:28:47.087457       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0908 04:28:47.579162       1 aws.go:2037] Releasing in-process attachment entry: bx -> volume vol-09a62c4bbae2b0b28\nI0908 04:28:47.579217       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-b8a9f895-efef-4642-a03a-723cfba6c648\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09a62c4bbae2b0b28\") from node \"ip-172-20-47-217.ap-northeast-2.compute.internal\" \nI0908 04:28:47.579488    
   1 event.go:291] \"Event occurred\" object=\"volume-1601/exec-volume-test-dynamicpv-lzh5\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-b8a9f895-efef-4642-a03a-723cfba6c648\\\" \"\nI0908 04:28:47.899570       1 aws.go:2037] Releasing in-process attachment entry: cg -> volume vol-0f77bfcaf03ad8dcb\nI0908 04:28:47.899755       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-3198be70-e10f-4c78-8a24-2ac470e655a0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f77bfcaf03ad8dcb\") from node \"ip-172-20-61-194.ap-northeast-2.compute.internal\" \nI0908 04:28:47.900057       1 event.go:291] \"Event occurred\" object=\"statefulset-4006/ss-2\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-3198be70-e10f-4c78-8a24-2ac470e655a0\\\" \"\nI0908 04:28:48.036376       1 namespace_controller.go:185] Namespace has been deleted downward-api-6441\nI0908 04:28:48.124209       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1705\nI0908 04:28:48.142784       1 namespace_controller.go:185] Namespace has been deleted downward-api-4492\nI0908 04:28:48.350684       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1705-9376/csi-mockplugin-5fb86d888\" objectUID=32cf96ba-aa8d-4cab-8ae5-ad40b8d779f1 kind=\"ControllerRevision\" virtual=false\nI0908 04:28:48.351088       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-1705-9376/csi-mockplugin\nI0908 04:28:48.351172       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1705-9376/csi-mockplugin-0\" objectUID=5b630bac-4999-47d7-839c-8c9673634851 kind=\"Pod\" virtual=false\nI0908 04:28:48.354389       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1705-9376/csi-mockplugin-0\" 
objectUID=5b630bac-4999-47d7-839c-8c9673634851 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:48.354514       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1705-9376/csi-mockplugin-5fb86d888\" objectUID=32cf96ba-aa8d-4cab-8ae5-ad40b8d779f1 kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:48.510538       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-1705-9376/csi-mockplugin-attacher\nI0908 04:28:48.510568       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1705-9376/csi-mockplugin-attacher-5bbdb9b9b4\" objectUID=643fe953-7470-44b3-885a-055e0f3ef6b0 kind=\"ControllerRevision\" virtual=false\nI0908 04:28:48.510721       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1705-9376/csi-mockplugin-attacher-0\" objectUID=2ff40e8c-032c-4564-9194-ba00d7a15bb5 kind=\"Pod\" virtual=false\nI0908 04:28:48.512779       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1705-9376/csi-mockplugin-attacher-5bbdb9b9b4\" objectUID=643fe953-7470-44b3-885a-055e0f3ef6b0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:48.513288       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1705-9376/csi-mockplugin-attacher-0\" objectUID=2ff40e8c-032c-4564-9194-ba00d7a15bb5 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:48.574013       1 namespace_controller.go:185] Namespace has been deleted job-6845\nI0908 04:28:48.719643       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5351\nI0908 04:28:48.819841       1 event.go:291] \"Event occurred\" object=\"volume-expand-8005/awsb2j84\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0908 04:28:48.824255       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-expand-8005/awsb2j84\"\nI0908 
04:28:49.156181       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpath-attacher-7c9c7f7b6d\" objectUID=38088b80-58ce-4904-ac3b-ba7d79256d5b kind=\"ControllerRevision\" virtual=false\nI0908 04:28:49.156567       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-5351-9652/csi-hostpath-attacher\nI0908 04:28:49.156740       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpath-attacher-0\" objectUID=76eb56d0-a194-419c-9c2e-6205471103ea kind=\"Pod\" virtual=false\nI0908 04:28:49.158731       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5351-9652/csi-hostpath-attacher-0\" objectUID=76eb56d0-a194-419c-9c2e-6205471103ea kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:49.159061       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5351-9652/csi-hostpath-attacher-7c9c7f7b6d\" objectUID=38088b80-58ce-4904-ac3b-ba7d79256d5b kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:49.488350       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpathplugin-djjmg\" objectUID=6c03c4bf-7626-429c-9175-0d1441bde463 kind=\"EndpointSlice\" virtual=false\nI0908 04:28:49.493402       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5351-9652/csi-hostpathplugin-djjmg\" objectUID=6c03c4bf-7626-429c-9175-0d1441bde463 kind=\"EndpointSlice\" propagationPolicy=Background\nE0908 04:28:49.556131       1 tokens_controller.go:262] error synchronizing serviceaccount replicaset-7449/default: secrets \"default-token-fsq4v\" is forbidden: unable to create new content in namespace replicaset-7449 because it is being terminated\nI0908 04:28:49.621406       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replicaset-7449/test-rs\" need=3 creating=1\nI0908 04:28:49.625359       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-7449/test-rs-46hqh\" 
objectUID=9f27b290-ae28-4ab5-a3c3-16d2e588c122 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:49.635519       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-7449/test-rs-6knsf\" objectUID=1aa42152-19ae-40ef-9946-f07ef2bfa726 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:49.638954       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-7449/test-rs-46hqh\" objectUID=9f27b290-ae28-4ab5-a3c3-16d2e588c122 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0908 04:28:49.641718       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-7449/test-rs-6knsf\" objectUID=1aa42152-19ae-40ef-9946-f07ef2bfa726 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0908 04:28:49.652444       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-7449/test-rs-pnrj9\" objectUID=4ea21b29-f431-48ac-9ebe-2f7fea797d24 kind=\"CiliumEndpoint\" virtual=false\nI0908 04:28:49.664130       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-7449/test-rs-pnrj9\" objectUID=4ea21b29-f431-48ac-9ebe-2f7fea797d24 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0908 04:28:49.672872       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpathplugin-6f8f4499b\" objectUID=d72a8980-8647-4fcc-b998-7825009950b0 kind=\"ControllerRevision\" virtual=false\nI0908 04:28:49.673295       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-5351-9652/csi-hostpathplugin\nI0908 04:28:49.673382       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpathplugin-0\" objectUID=c607242a-6e8f-4341-8603-450cf7dbf451 kind=\"Pod\" virtual=false\nI0908 04:28:49.677028       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5351-9652/csi-hostpathplugin-6f8f4499b\" objectUID=d72a8980-8647-4fcc-b998-7825009950b0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:49.677411       1 garbagecollector.go:580] 
\"Deleting object\" object=\"ephemeral-5351-9652/csi-hostpathplugin-0\" objectUID=c607242a-6e8f-4341-8603-450cf7dbf451 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:49.839142       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-5351-9652/csi-hostpath-provisioner\nI0908 04:28:49.839206       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpath-provisioner-7c667fc5b7\" objectUID=2f002fbf-d798-459f-a7ef-36211c54ec82 kind=\"ControllerRevision\" virtual=false\nI0908 04:28:49.839299       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpath-provisioner-0\" objectUID=76232520-590f-4031-a920-90d186014a70 kind=\"Pod\" virtual=false\nI0908 04:28:49.841645       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5351-9652/csi-hostpath-provisioner-0\" objectUID=76232520-590f-4031-a920-90d186014a70 kind=\"Pod\" propagationPolicy=Background\nI0908 04:28:49.842007       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5351-9652/csi-hostpath-provisioner-7c667fc5b7\" objectUID=2f002fbf-d798-459f-a7ef-36211c54ec82 kind=\"ControllerRevision\" propagationPolicy=Background\nI0908 04:28:50.106874       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpath-resizer-59d56cb6\" objectUID=d7433f70-6b57-4348-9efc-e4e2e39e4672 kind=\"ControllerRevision\" virtual=false\nI0908 04:28:50.107422       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5351-9652/csi-hostpath-resizer-0\" objectUID=5d6bee03-cd70-4343-bb1f-db1d05a2bf7b kind=\"Pod\" virtual=false\nI0908 04:28:50.107447       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-5351-9652/csi-hostpath-resizer\nI0908 04:28:50.118119       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5351-9652/csi-hostpath-resizer-59d56cb6\" objectUID=d7433f70-6b57-4348-9efc-e4e2e39e4672 kind=\"ControllerRevision\" 
propagationPolicy=Background
I0908 04:28:50.126829       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5351-9652/csi-hostpath-resizer-0" objectUID=5d6bee03-cd70-4343-bb1f-db1d05a2bf7b kind="Pod" propagationPolicy=Background
I0908 04:28:50.276292       1 garbagecollector.go:471] "Processing object" object="ephemeral-5351-9652/csi-hostpath-snapshotter-84d845b846" objectUID=ede658f8-c577-4e19-82d8-9e01d721738d kind="ControllerRevision" virtual=false
I0908 04:28:50.276510       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-5351-9652/csi-hostpath-snapshotter
I0908 04:28:50.276638       1 garbagecollector.go:471] "Processing object" object="ephemeral-5351-9652/csi-hostpath-snapshotter-0" objectUID=6d52524f-fb40-4391-a4e2-d05ab80a5994 kind="Pod" virtual=false
I0908 04:28:50.278745       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5351-9652/csi-hostpath-snapshotter-84d845b846" objectUID=ede658f8-c577-4e19-82d8-9e01d721738d kind="ControllerRevision" propagationPolicy=Background
I0908 04:28:50.279393       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5351-9652/csi-hostpath-snapshotter-0" objectUID=6d52524f-fb40-4391-a4e2-d05ab80a5994 kind="Pod" propagationPolicy=Background
I0908 04:28:50.628724       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:50.646484       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f5dbdf6403260e2a") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:50.653406       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f5dbdf6403260e2a") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:50.657856       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
E0908 04:28:51.049483       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-4306/default: secrets "default-token-rhz96" is forbidden: unable to create new content in namespace webhook-4306 because it is being terminated
E0908 04:28:51.668950       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:28:52.967954       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-6151/pvc-cbw48"
I0908 04:28:52.972231       1 pv_controller.go:640] volume "local-mlj7k" is released and reclaim policy "Retain" will be executed
I0908 04:28:52.974815       1 pv_controller.go:879] volume "local-mlj7k" entered phase "Released"
I0908 04:28:53.134291       1 pv_controller_base.go:505] deletion of claim "volume-6151/pvc-cbw48" was already processed
I0908 04:28:53.199321       1 resource_quota_controller.go:307] Resource quota has been deleted kubectl-7803/million
I0908 04:28:53.413597       1 stateful_set_control.go:523] StatefulSet statefulset-4006/ss terminating Pod ss-1 for update
I0908 04:28:53.422027       1 event.go:291] "Event occurred" object="statefulset-4006/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
E0908 04:28:53.822273       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-1705-9376/default: secrets "default-token-s8mc9" is forbidden: unable to create new content in namespace csi-mock-volumes-1705-9376 because it is being terminated
I0908 04:28:53.895604       1 namespace_controller.go:185] Namespace has been deleted projected-2958
E0908 04:28:54.367675       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-8005/default: secrets "default-token-lpq7v" is forbidden: unable to create new content in namespace volume-expand-8005 because it is being terminated
I0908 04:28:54.771251       1 namespace_controller.go:185] Namespace has been deleted replicaset-7449
I0908 04:28:55.237533       1 stateful_set.go:419] StatefulSet has been deleted statefulset-3293/ss2
I0908 04:28:55.237573       1 garbagecollector.go:471] "Processing object" object="statefulset-3293/ss2-5bbbc9fc94" objectUID=d8e8df98-b0a7-4190-900c-3a2ecfcbb56a kind="ControllerRevision" virtual=false
I0908 04:28:55.237812       1 garbagecollector.go:471] "Processing object" object="statefulset-3293/ss2-677d6db895" objectUID=32fcac56-52f6-4cfa-bc54-e0f4c2fd0cc5 kind="ControllerRevision" virtual=false
I0908 04:28:55.242176       1 garbagecollector.go:580] "Deleting object" object="statefulset-3293/ss2-5bbbc9fc94" objectUID=d8e8df98-b0a7-4190-900c-3a2ecfcbb56a kind="ControllerRevision" propagationPolicy=Background
I0908 04:28:55.242271       1 garbagecollector.go:580] "Deleting object" object="statefulset-3293/ss2-677d6db895" objectUID=32fcac56-52f6-4cfa-bc54-e0f4c2fd0cc5 kind="ControllerRevision" propagationPolicy=Background
E0908 04:28:55.678592       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-5351-9652/default: secrets "default-token-g4ckw" is forbidden: unable to create new content in namespace ephemeral-5351-9652 because it is being terminated
I0908 04:28:55.680973       1 pv_controller.go:879] volume "local-pvqj644" entered phase "Available"
I0908 04:28:55.830348       1 pv_controller.go:930] claim "persistent-local-volumes-test-4014/pvc-7rc2w" bound to volume "local-pvqj644"
I0908 04:28:55.841063       1 pv_controller.go:879] volume "local-pvqj644" entered phase "Bound"
I0908 04:28:55.841094       1 pv_controller.go:982] volume "local-pvqj644" bound to claim "persistent-local-volumes-test-4014/pvc-7rc2w"
I0908 04:28:55.849732       1 pv_controller.go:823] claim "persistent-local-volumes-test-4014/pvc-7rc2w" entered phase "Bound"
I0908 04:28:55.859522       1 namespace_controller.go:185] Namespace has been deleted volume-expand-6437-8303
I0908 04:28:56.070806       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:56.263120       1 aws.go:2291] Waiting for volume "vol-0f5dbdf6403260e2a" state: actual=detaching, desired=detached
I0908 04:28:56.268910       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-9325
I0908 04:28:56.301606       1 namespace_controller.go:185] Namespace has been deleted webhook-4306
I0908 04:28:56.326781       1 namespace_controller.go:185] Namespace has been deleted clientset-488
I0908 04:28:56.350971       1 namespace_controller.go:185] Namespace has been deleted downward-api-5683
I0908 04:28:56.436364       1 namespace_controller.go:185] Namespace has been deleted webhook-4306-markers
I0908 04:28:56.563280       1 pv_controller.go:879] volume "local-pv25zrc" entered phase "Available"
I0908 04:28:56.717267       1 pv_controller.go:930] claim "persistent-local-volumes-test-9591/pvc-v7jkg" bound to volume "local-pv25zrc"
I0908 04:28:56.729215       1 pv_controller.go:879] volume "local-pv25zrc" entered phase "Bound"
I0908 04:28:56.729245       1 pv_controller.go:982] volume "local-pv25zrc" bound to claim "persistent-local-volumes-test-9591/pvc-v7jkg"
I0908 04:28:56.736872       1 pv_controller.go:823] claim "persistent-local-volumes-test-9591/pvc-v7jkg" entered phase "Bound"
E0908 04:28:57.180570       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-8098/default: secrets "default-token-jpv59" is forbidden: unable to create new content in namespace downward-api-8098 because it is being terminated
I0908 04:28:57.313901       1 aws.go:1819] Found instances in zones map[ap-northeast-2a:{}]
I0908 04:28:57.507751       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-8536, name: inline-volume-tester-rffzv, uid: 934c7c38-fc20-426f-b238-81e266aff803] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0908 04:28:57.508615       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-tester-rffzv-my-volume-0" objectUID=06fb8533-4ea5-4a24-af85-8c80aa959120 kind="PersistentVolumeClaim" virtual=false
I0908 04:28:57.509373       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-tester-rffzv" objectUID=03070a46-a1a3-48a1-a059-d76592f3d147 kind="CiliumEndpoint" virtual=false
I0908 04:28:57.510953       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-tester-rffzv" objectUID=934c7c38-fc20-426f-b238-81e266aff803 kind="Pod" virtual=false
I0908 04:28:57.524934       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-9591/pvc-v7jkg"
I0908 04:28:57.532203       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-8536, name: inline-volume-tester-rffzv, uid: 03070a46-a1a3-48a1-a059-d76592f3d147] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8536, name: inline-volume-tester-rffzv, uid: 934c7c38-fc20-426f-b238-81e266aff803] is deletingDependents
I0908 04:28:57.532228       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8536, name: inline-volume-tester-rffzv-my-volume-0, uid: 06fb8533-4ea5-4a24-af85-8c80aa959120] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8536, name: inline-volume-tester-rffzv, uid: 934c7c38-fc20-426f-b238-81e266aff803] is deletingDependents
I0908 04:28:57.537253       1 garbagecollector.go:580] "Deleting object" object="ephemeral-8536/inline-volume-tester-rffzv-my-volume-0" objectUID=06fb8533-4ea5-4a24-af85-8c80aa959120 kind="PersistentVolumeClaim" propagationPolicy=Background
I0908 04:28:57.538202       1 garbagecollector.go:580] "Deleting object" object="ephemeral-8536/inline-volume-tester-rffzv" objectUID=03070a46-a1a3-48a1-a059-d76592f3d147 kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:57.538474       1 pv_controller.go:640] volume "local-pv25zrc" is released and reclaim policy "Retain" will be executed
I0908 04:28:57.543048       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-tester-rffzv-my-volume-0" objectUID=06fb8533-4ea5-4a24-af85-8c80aa959120 kind="PersistentVolumeClaim" virtual=false
I0908 04:28:57.543913       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-8536/inline-volume-tester-rffzv" PVC="ephemeral-8536/inline-volume-tester-rffzv-my-volume-0"
I0908 04:28:57.544143       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-8536/inline-volume-tester-rffzv-my-volume-0"
I0908 04:28:57.547750       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-tester-rffzv" objectUID=934c7c38-fc20-426f-b238-81e266aff803 kind="Pod" virtual=false
I0908 04:28:57.548200       1 pv_controller.go:879] volume "local-pv25zrc" entered phase "Released"
I0908 04:28:57.548544       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-tester-rffzv" objectUID=03070a46-a1a3-48a1-a059-d76592f3d147 kind="CiliumEndpoint" virtual=false
I0908 04:28:57.550379       1 garbagecollector.go:580] "Deleting object" object="ephemeral-8536/inline-volume-tester-rffzv-my-volume-0" objectUID=06fb8533-4ea5-4a24-af85-8c80aa959120 kind="PersistentVolumeClaim" propagationPolicy=Background
I0908 04:28:57.551537       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-8536, name: inline-volume-tester-rffzv-my-volume-0, uid: 06fb8533-4ea5-4a24-af85-8c80aa959120] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-8536, name: inline-volume-tester-rffzv, uid: 934c7c38-fc20-426f-b238-81e266aff803] is deletingDependents
I0908 04:28:57.553370       1 garbagecollector.go:471] "Processing object" object="ephemeral-8536/inline-volume-tester-rffzv-my-volume-0" objectUID=06fb8533-4ea5-4a24-af85-8c80aa959120 kind="PersistentVolumeClaim" virtual=false
I0908 04:28:57.676038       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-9591/pvc-v7jkg" was already processed
I0908 04:28:58.001732       1 deployment_controller.go:583] "Deployment has been deleted" deployment="crd-webhook-8032/sample-crd-conversion-webhook-deployment"
I0908 04:28:58.002912       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388") from node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:28:58.078224       1 aws.go:2014] Assigned mount device ba -> volume vol-09bee0ad558aad388
I0908 04:28:58.204329       1 pvc_protection_controller.go:291] "PVC is unused" PVC="fsgroupchangepolicy-6066/awshszzm"
I0908 04:28:58.215814       1 pv_controller.go:640] volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" is released and reclaim policy "Delete" will be executed
I0908 04:28:58.218536       1 pv_controller.go:879] volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" entered phase "Released"
I0908 04:28:58.219701       1 pv_controller.go:1341] isVolumeReleased[pvc-19c36838-74df-4e5b-bdf7-a537f826bec3]: volume is released
I0908 04:28:58.237163       1 namespace_controller.go:185] Namespace has been deleted kubectl-7803
I0908 04:28:58.342997       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-0f5dbdf6403260e2a: error deleting EBS volume "vol-0f5dbdf6403260e2a" since volume is currently attached to "i-099d1b4e99330f8ed"
E0908 04:28:58.343063       1 goroutinemap.go:150] Operation for "delete-pvc-19c36838-74df-4e5b-bdf7-a537f826bec3[b2122ac3-b82d-4a61-aad2-f6ea123c3d68]" failed. No retries permitted until 2021-09-08 04:28:58.843043998 +0000 UTC m=+963.920718441 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-0f5dbdf6403260e2a\" since volume is currently attached to \"i-099d1b4e99330f8ed\""
I0908 04:28:58.343148       1 event.go:291] "Event occurred" object="pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-0f5dbdf6403260e2a\" since volume is currently attached to \"i-099d1b4e99330f8ed\""
I0908 04:28:58.345299       1 namespace_controller.go:185] Namespace has been deleted secrets-1220
I0908 04:28:58.349794       1 aws.go:2291] Waiting for volume "vol-0f5dbdf6403260e2a" state: actual=detaching, desired=detached
I0908 04:28:58.401618       1 namespace_controller.go:185] Namespace has been deleted provisioning-9314
I0908 04:28:58.412425       1 aws.go:2427] AttachVolume volume="vol-09bee0ad558aad388" instance="i-099d1b4e99330f8ed" request returned {
  AttachTime: 2021-09-08 04:28:58.378 +0000 UTC,
  Device: "/dev/xvdba",
  InstanceId: "i-099d1b4e99330f8ed",
  State: "attaching",
  VolumeId: "vol-09bee0ad558aad388"
}
I0908 04:28:58.434210       1 garbagecollector.go:471] "Processing object" object="services-5052/verify-service-up-exec-pod-wck4t" objectUID=301e6445-5020-4acc-8187-ea08d0aa10b6 kind="CiliumEndpoint" virtual=false
I0908 04:28:58.436996       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-1803/test-quota
I0908 04:28:58.438223       1 garbagecollector.go:580] "Deleting object" object="services-5052/verify-service-up-exec-pod-wck4t" objectUID=301e6445-5020-4acc-8187-ea08d0aa10b6 kind="CiliumEndpoint" propagationPolicy=Background
I0908 04:28:58.617286       1 event.go:291] "Event occurred" object="statefulset-4006/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0908 04:28:58.840069       1 event.go:291] "Event occurred" object="csi-mock-volumes-6819-928/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0908 04:28:59.001884       1 event.go:291] "Event occurred" object="csi-mock-volumes-6819-928/csi-mockplugin-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I0908 04:28:59.433181       1 namespace_controller.go:185] Namespace has been deleted volume-expand-8005
I0908 04:28:59.858336       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-1601/awsfhlpx"
I0908 04:28:59.878308       1 pv_controller.go:640] volume "pvc-b8a9f895-efef-4642-a03a-723cfba6c648" is released and reclaim policy "Delete" will be executed
I0908 04:28:59.883287       1 pv_controller.go:879] volume "pvc-b8a9f895-efef-4642-a03a-723cfba6c648" entered phase "Released"
I0908 04:28:59.896602       1 pv_controller.go:1341] isVolumeReleased[pvc-b8a9f895-efef-4642-a03a-723cfba6c648]: volume is released
I0908 04:29:00.075471       1 replica_set.go:559] "Too few replicas" replicaSet="proxy-7365/proxy-service-pfxzn" need=1 creating=1
I0908 04:29:00.076509       1 aws_util.go:62] Error deleting EBS Disk volume aws://ap-northeast-2a/vol-09a62c4bbae2b0b28: error deleting EBS volume "vol-09a62c4bbae2b0b28" since volume is currently attached to "i-099d1b4e99330f8ed"
E0908 04:29:00.076923       1 goroutinemap.go:150] Operation for "delete-pvc-b8a9f895-efef-4642-a03a-723cfba6c648[6f872b0d-8fdf-44cc-ac87-b154dcd665cb]" failed. No retries permitted until 2021-09-08 04:29:00.576902423 +0000 UTC m=+965.654576871 (durationBeforeRetry 500ms). Error: "error deleting EBS volume \"vol-09a62c4bbae2b0b28\" since volume is currently attached to \"i-099d1b4e99330f8ed\""
I0908 04:29:00.077107       1 event.go:291] "Event occurred" object="pvc-b8a9f895-efef-4642-a03a-723cfba6c648" kind="PersistentVolume" apiVersion="v1" type="Normal" reason="VolumeDelete" message="error deleting EBS volume \"vol-09a62c4bbae2b0b28\" since volume is currently attached to \"i-099d1b4e99330f8ed\""
I0908 04:29:00.084755       1 event.go:291] "Event occurred" object="proxy-7365/proxy-service-pfxzn" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: proxy-service-pfxzn-ww9p6"
I0908 04:29:00.531582       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volumemode-5481/pvc-s4zns"
I0908 04:29:00.535744       1 aws.go:2037] Releasing in-process attachment entry: ba -> volume vol-09bee0ad558aad388
I0908 04:29:00.536762       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388") from node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:29:00.537149       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2460/pod-7af4d9e3-970d-4607-8202-2b97cd16161c" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792\" "
I0908 04:29:00.541905       1 pv_controller.go:640] volume "local-ncdwp" is released and reclaim policy "Retain" will be executed
I0908 04:29:00.545035       1 pv_controller.go:879] volume "local-ncdwp" entered phase "Released"
I0908 04:29:00.564147       1 namespace_controller.go:185] Namespace has been deleted firewall-test-9293
I0908 04:29:00.629711       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-b8a9f895-efef-4642-a03a-723cfba6c648" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09a62c4bbae2b0b28") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:29:00.632971       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-b8a9f895-efef-4642-a03a-723cfba6c648" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09a62c4bbae2b0b28") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:29:00.700062       1 pv_controller_base.go:505] deletion of claim "volumemode-5481/pvc-s4zns" was already processed
E0908 04:29:00.720159       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0908 04:29:01.219320       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-3293/default: secrets "default-token-hnxb8" is forbidden: unable to create new content in namespace statefulset-3293 because it is being terminated
E0908 04:29:01.938117       1 tokens_controller.go:262] error synchronizing serviceaccount volume-6151/default: secrets "default-token-lcxqh" is forbidden: unable to create new content in namespace volume-6151 because it is being terminated
I0908 04:29:02.298466       1 namespace_controller.go:185] Namespace has been deleted downward-api-8098
I0908 04:29:02.423258       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-09-08 04:28:26 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdcj",
  InstanceId: "i-099d1b4e99330f8ed",
  State: "detaching",
  VolumeId: "vol-0f5dbdf6403260e2a"
}
I0908 04:29:02.423307       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0f5dbdf6403260e2a") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:29:02.659178       1 aws_util.go:113] Successfully created EBS Disk volume aws://ap-northeast-2a/vol-0765e5ecb4bf33621
I0908 04:29:02.721453       1 pv_controller.go:1677] volume "pvc-bb3e9fa9-653b-4d1f-8cf0-03b268a434e0" provisioned for claim "topology-1042/pvc-97lfp"
I0908 04:29:02.721789       1 event.go:291] "Event occurred" object="topology-1042/pvc-97lfp" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-bb3e9fa9-653b-4d1f-8cf0-03b268a434e0 using kubernetes.io/aws-ebs"
I0908 04:29:02.725412       1 pv_controller.go:879] volume "pvc-bb3e9fa9-653b-4d1f-8cf0-03b268a434e0" entered phase "Bound"
I0908 04:29:02.725642       1 pv_controller.go:982] volume "pvc-bb3e9fa9-653b-4d1f-8cf0-03b268a434e0" bound to claim "topology-1042/pvc-97lfp"
I0908 04:29:02.731609       1 pv_controller.go:823] claim "topology-1042/pvc-97lfp" entered phase "Bound"
E0908 04:29:02.824734       1 pv_controller.go:1452] error finding provisioning plugin for claim volume-3398/pvc-shzhf: storageclass.storage.k8s.io "volume-3398" not found
I0908 04:29:02.825017       1 event.go:291] "Event occurred" object="volume-3398/pvc-shzhf" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-3398\" not found"
I0908 04:29:02.986849       1 pv_controller.go:879] volume "local-fb9mg" entered phase "Available"
E0908 04:29:03.679014       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:29:03.972902       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1705-9376
E0908 04:29:04.026604       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:29:04.494570       1 replica_set.go:559] "Too few replicas" replicaSet="webhook-7053/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0908 04:29:04.495503       1 event.go:291] "Event occurred" object="webhook-7053/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0908 04:29:04.504470       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-7053/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0908 04:29:04.505007       1 event.go:291] "Event occurred" object="webhook-7053/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-h546r"
I0908 04:29:04.562543       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-bb3e9fa9-653b-4d1f-8cf0-03b268a434e0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0765e5ecb4bf33621") from node "ip-172-20-53-124.ap-northeast-2.compute.internal" 
I0908 04:29:04.644154       1 aws.go:2014] Assigned mount device cm -> volume vol-0765e5ecb4bf33621
I0908 04:29:04.813491       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-4014/pod-011471ec-d46b-4e71-96be-00a271f48252" PVC="persistent-local-volumes-test-4014/pvc-7rc2w"
I0908 04:29:04.813522       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-4014/pvc-7rc2w"
I0908 04:29:04.816147       1 event.go:291] "Event occurred" object="csi-mock-volumes-6819/pvc-vm6mn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-6819\" or manually created by system administrator"
I0908 04:29:04.816176       1 event.go:291] "Event occurred" object="csi-mock-volumes-6819/pvc-vm6mn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-6819\" or manually created by system administrator"
I0908 04:29:04.827465       1 pv_controller.go:879] volume "pvc-fd662ec2-a539-4711-a737-d13cf77795d9" entered phase "Bound"
I0908 04:29:04.827503       1 pv_controller.go:982] volume "pvc-fd662ec2-a539-4711-a737-d13cf77795d9" bound to claim "csi-mock-volumes-6819/pvc-vm6mn"
I0908 04:29:04.833418       1 pv_controller.go:823] claim "csi-mock-volumes-6819/pvc-vm6mn" entered phase "Bound"
I0908 04:29:04.979853       1 aws.go:2427] AttachVolume volume="vol-0765e5ecb4bf33621" instance="i-09adeb68df49b2ff0" request returned {
  AttachTime: 2021-09-08 04:29:04.945 +0000 UTC,
  Device: "/dev/xvdcm",
  InstanceId: "i-09adeb68df49b2ff0",
  State: "attaching",
  VolumeId: "vol-0765e5ecb4bf33621"
}
I0908 04:29:05.471353       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-fd662ec2-a539-4711-a737-d13cf77795d9" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6819^4") from node "ip-172-20-48-118.ap-northeast-2.compute.internal" 
I0908 04:29:05.587571       1 namespace_controller.go:185] Namespace has been deleted hostpath-5671
E0908 04:29:05.588383       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:29:06.001104       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-695/test-orphan-deployment"
I0908 04:29:06.028880       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-b8a9f895-efef-4642-a03a-723cfba6c648" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09a62c4bbae2b0b28") on node "ip-172-20-47-217.ap-northeast-2.compute.internal" 
I0908 04:29:06.030805       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-fd662ec2-a539-4711-a737-d13cf77795d9" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6819^4") from node "ip-172-20-48-118.ap-northeast-2.compute.internal" 
I0908 04:29:06.030972       1 event.go:291] "Event occurred" object="csi-mock-volumes-6819/pvc-volume-tester-ktljd" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-fd662ec2-a539-4711-a737-d13cf77795d9\" "
I0908 04:29:06.061337       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5351-9652
I0908 04:29:06.230692       1 namespace_controller.go:185] Namespace has been deleted statefulset-3293
I0908 04:29:06.300658       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-4014/pod-011471ec-d46b-4e71-96be-00a271f48252" PVC="persistent-local-volumes-test-4014/pvc-7rc2w"
I0908 04:29:06.300686       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-4014/pvc-7rc2w"
E0908 04:29:06.958886       1 tokens_controller.go:262] error synchronizing serviceaccount projected-5226/default: secrets "default-token-57qdl" is forbidden: unable to create new content in namespace projected-5226 because it is being terminated
I0908 04:29:06.968776       1 namespace_controller.go:185] Namespace has been deleted volume-6151
I0908 04:29:07.085222       1 aws.go:2037] Releasing in-process attachment entry: cm -> volume vol-0765e5ecb4bf33621
I0908 04:29:07.085854       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-bb3e9fa9-653b-4d1f-8cf0-03b268a434e0" (UniqueName: "kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-0765e5ecb4bf33621") from node "ip-172-20-53-124.ap-northeast-2.compute.internal" 
I0908 04:29:07.086091       1 event.go:291] "Event occurred" object="topology-1042/pod-3591bd10-de91-46d2-a5e8-cad71e8d1445" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-bb3e9fa9-653b-4d1f-8cf0-03b268a434e0\" "
I0908 04:29:07.717936       1 namespace_controller.go:185] Namespace has been deleted gc-1172
I0908 04:29:07.814017       1 garbagecollector.go:471] "Processing object" object="proxy-7365/proxy-service-pfxzn-ww9p6" objectUID=6cbd49fa-f059-43da-acb2-8cbc92a9af37 kind="Pod" virtual=false
I0908 04:29:07.842221       1 garbagecollector.go:580] "Deleting object" object="proxy-7365/proxy-service-pfxzn-ww9p6" objectUID=6cbd49fa-f059-43da-acb2-8cbc92a9af37 kind="Pod" propagationPolicy=Background
I0908 04:29:07.992700       1 pv_controller.go:879] volume "local-pv74b48" entered phase "Available"
I0908 04:29:08.150331       1 pv_controller.go:930] claim "persistent-local-volumes-test-1991/pvc-rm7lh" bound to volume "local-pv74b48"
I0908 04:29:08.166577       1 pv_controller.go:879] volume "local-pv74b48" entered phase "Bound"
I0908 04:29:08.166604       1 pv_controller.go:982] volume "local-pv74b48" bound to claim "persistent-local-volumes-test-1991/pvc-rm7lh"
I0908 04:29:08.175360       1 pv_controller.go:823] claim "persistent-local-volumes-test-1991/pvc-rm7lh" entered phase "Bound"
I0908 04:29:08.980987       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-1991/pvc-rm7lh"
I0908 04:29:08.991748       1 pv_controller.go:640] volume "local-pv74b48" is released and reclaim policy "Retain" will be executed
I0908 04:29:08.995151       1 pv_controller.go:879] volume "local-pv74b48" entered phase "Released"
I0908 04:29:09.000444       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-6252/sample-webhook-deployment"
I0908 04:29:09.116963       1 namespace_controller.go:185] Namespace has been deleted resourcequota-1803
I0908 04:29:09.131065       1 namespace_controller.go:185] Namespace has been deleted provisioning-9410
E0908 04:29:09.141338       1 tokens_controller.go:262] error synchronizing serviceaccount node-lease-test-2338/default: secrets "default-token-6jbjx" is forbidden: unable to create new content in namespace node-lease-test-2338 because it is being terminated
I0908 04:29:09.149747       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-1991/pvc-rm7lh" was already processed
E0908 04:29:09.649881       1 tokens_controller.go:262] error synchronizing serviceaccount subpath-4436/default: secrets "default-token-s6tr8" is forbidden: unable to create new content in namespace subpath-4436 because it is being terminated
I0908 04:29:09.666848       1 pv_controller.go:930] claim "volume-3398/pvc-shzhf" bound to volume "local-fb9mg"
I0908 04:29:09.668013       1 pv_controller.go:1341] isVolumeReleased[pvc-19c36838-74df-4e5b-bdf7-a537f826bec3]: volume is released
I0908 04:29:09.670239       1 pv_controller.go:1341] isVolumeReleased[pvc-b8a9f895-efef-4642-a03a-723cfba6c648]: volume is released
I0908 04:29:09.674947       1 pv_controller.go:879] volume "local-fb9mg" entered phase "Bound"
I0908 04:29:09.674979       1 pv_controller.go:982] volume "local-fb9mg" bound to claim "volume-3398/pvc-shzhf"
I0908 04:29:09.682570       1 pv_controller.go:823] claim "volume-3398/pvc-shzhf" entered phase "Bound"
E0908 04:29:09.702145       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0908 04:29:09.832049       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-northeast-2a/vol-09a62c4bbae2b0b28
I0908 04:29:09.832080       1 pv_controller.go:1436] volume "pvc-b8a9f895-efef-4642-a03a-723cfba6c648" deleted
I0908 04:29:09.838820       1 pv_controller_base.go:505] deletion of claim "volume-1601/awsfhlpx" was already processed
I0908 04:29:09.870302       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ap-northeast-2a/vol-0f5dbdf6403260e2a
I0908 04:29:09.870345       1 pv_controller.go:1436] volume "pvc-19c36838-74df-4e5b-bdf7-a537f826bec3" deleted
I0908 04:29:09.875719       1 pv_controller_base.go:505] deletion of claim "fsgroupchangepolicy-6066/awshszzm" was already processed
I0908 04:29:10.149652       1 garbagecollector.go:471] "Processing object" object="services-5052/service-headless-gzkb9" objectUID=0177c903-bedd-46ea-9412-d6de563200d6 kind="Pod" virtual=false
I0908 04:29:10.149822       1 garbagecollector.go:471] "Processing object" object="services-5052/service-headless-474rn" objectUID=a48f0efa-e493-4cea-a73a-a9eb3ed05abf kind="Pod" virtual=false
I0908 04:29:10.150081       1 garbagecollector.go:471] "Processing object" object="services-5052/service-headless-8x6sc" objectUID=0e6dd69b-d530-411e-8fd9-c25bd1994af5 kind="Pod" virtual=false
I0908 04:29:10.152625       1 garbagecollector.go:580] "Deleting object" object="services-5052/service-headless-474rn" objectUID=a48f0efa-e493-4cea-a73a-a9eb3ed05abf kind="Pod" propagationPolicy=Background
I0908 04:29:10.153098       1 garbagecollector.go:471] "Processing object" object="services-5052/service-headless-toggled-8tf2x" objectUID=2f8b39d9-c4b5-4543-86ff-7af100a5ee76 kind="Pod" virtual=false
I0908 04:29:10.153476       1 garbagecollector.go:471] "Processing object" object="services-5052/service-headless-toggled-gr84p" objectUID=8dcec178-80d9-49ba-9433-f699ec6ecee2 kind="Pod" virtual=false
I0908 04:29:10.153760       1 garbagecollector.go:471] "Processing object" object="services-5052/service-headless-toggled-7l7wh" objectUID=1ab0eb8a-9e7c-4000-80c4-8f314a00d805 kind="Pod" virtual=false
I0908 04:29:10.155223       1 garbagecollector.go:580] "Deleting object" object="services-5052/service-headless-gzkb9" objectUID=0177c903-bedd-46ea-9412-d6de563200d6 kind="Pod" propagationPolicy=Background
I0908 04:29:10.155574       1 garbagecollector.go:580] "Deleting object" object="services-5052/service-headless-8x6sc" objectUID=0e6dd69b-d530-411e-8fd9-c25bd1994af5 kind="Pod" propagationPolicy=Background
I0908 04:29:10.157197       1 garbagecollector.go:580] "Deleting object" object="services-5052/service-headless-toggled-gr84p" objectUID=8dcec178-80d9-49ba-9433-f699ec6ecee2 kind="Pod" propagationPolicy=Background
W0908 04:29:10.160980       1 utils.go:265] Service services-5052/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
I0908 04:29:10.164266       1 garbagecollector.go:580] "Deleting object" object="services-5052/service-headless-toggled-8tf2x" objectUID=2f8b39d9-c4b5-4543-86ff-7af100a5ee76 kind="Pod" propagationPolicy=Background
I0908 04:29:10.165240       1 garbagecollector.go:580] "Deleting object" object="services-5052/service-headless-toggled-7l7wh" objectUID=1ab0eb8a-9e7c-4000-80c4-8f314a00d805 kind="Pod" propagationPolicy=Background
W0908 04:29:10.168866       1 utils.go:265] Service services-5052/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
W0908 04:29:10.173839       1 utils.go:265] Service services-5052/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
I0908 04:29:10.204975       1 endpoints_controller.go:368] "Error syncing endpoints, retrying" service="services-5052/service-headless-toggled" err="Operation cannot be fulfilled on endpoints \"service-headless-toggled\": the object has been modified; please apply your changes to the latest version and try again"
I0908 04:29:10.205143       1 event.go:291] "Event occurred" object="services-5052/service-headless-toggled" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint services-5052/service-headless-toggled: Operation cannot be fulfilled on endpoints \"service-headless-toggled\": the object has been modified; please apply your changes to the latest version and try again"
E0908 04:29:10.206829       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"service-headless-toggled.16a2be63a2dc6869", GenerateName:"", Namespace:"services-5052", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Endpoints", Namespace:"services-5052", Name:"service-headless-toggled", UID:"713bd131-d879-467b-b9b1-8dc0ad6634bd", APIVersion:"v1", ResourceVersion:"33273", FieldPath:""}, Reason:"FailedToUpdateEndpoint", Message:"Failed to update endpoint services-5052/service-headless-toggled: Operation cannot be fulfilled on endpoints \"service-headless-toggled\": the object has been modified; please apply your changes to the latest version and try again", Source:v1.EventSource{Component:"endpoint-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0462ce58c370c69, ext:975282608705, loc:(*time.Location)(0x7301440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0462ce58c370c69, ext:975282608705, loc:(*time.Location)(0x7301440)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "service-headless-toggled.16a2be63a2dc6869" is forbidden: unable to create new content in namespace services-5052 because it is being terminated' (will not retry!)
I0908 04:29:10.248200       1 stateful_set_control.go:523] StatefulSet statefulset-4006/ss terminating Pod ss-0 for update
I0908 04:29:10.262864       1 event.go:291] "Event occurred" object="statefulset-4006/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0908 04:29:10.310020       1 
event.go:291] \"Event occurred\" object=\"provisioning-7641-3543/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nE0908 04:29:10.337268       1 tokens_controller.go:262] error synchronizing serviceaccount services-5052/default: secrets \"default-token-m6qbz\" is forbidden: unable to create new content in namespace services-5052 because it is being terminated\nE0908 04:29:10.517697       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-4659/default: secrets \"default-token-9n6tv\" is forbidden: unable to create new content in namespace configmap-4659 because it is being terminated\nI0908 04:29:10.682736       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388\") on node \"ip-172-20-47-217.ap-northeast-2.compute.internal\" \nI0908 04:29:10.684931       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-839e2563-22e2-4cfd-b7cd-614b4b7c9792\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-09bee0ad558aad388\") on node \"ip-172-20-47-217.ap-northeast-2.compute.internal\" \nE0908 04:29:10.712970       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nI0908 04:29:10.815862       1 event.go:291] \"Event occurred\" object=\"provisioning-7641-3543/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nE0908 04:29:10.886306       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-3397/default: secrets \"default-token-xbmzs\" is 
forbidden: unable to create new content in namespace provisioning-3397 because it is being terminated\nI0908 04:29:10.962708       1 event.go:291] \"Event occurred\" object=\"provisioning-7641-3543/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nE0908 04:29:11.046015       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nI0908 04:29:11.130464       1 event.go:291] \"Event occurred\" object=\"provisioning-7641-3543/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nE0908 04:29:11.228618       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nI0908 04:29:11.251468       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [crd-publish-openapi-test-multi-to-single-ver.example.com/v5, Resource=e2e-test-crd-publish-openapi-6860-crds], removed: []\nI0908 04:29:11.251750       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-6860-crds.crd-publish-openapi-test-multi-to-single-ver.example.com\nI0908 04:29:11.251926       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0908 04:29:11.293610       1 event.go:291] \"Event occurred\" object=\"provisioning-7641-3543/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0908 
04:29:11.352117       1 shared_informer.go:247] Caches are synced for resource quota \nI0908 04:29:11.352150       1 resource_quota_controller.go:454] synced quota controller\nE0908 04:29:11.408111       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nE0908 04:29:11.585550       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nI0908 04:29:11.696373       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd-publish-openapi-test-multi-to-single-ver.example.com/v5, Resource=e2e-test-crd-publish-openapi-6860-crds], removed: []\nI0908 04:29:11.717818       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0908 04:29:11.717896       1 shared_informer.go:247] Caches are synced for garbage collector \nI0908 04:29:11.717906       1 garbagecollector.go:254] synced garbage collector\nI0908 04:29:11.794095       1 event.go:291] \"Event occurred\" object=\"statefulset-4006/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nE0908 04:29:11.869894       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nW0908 04:29:11.890796       1 reconciler.go:335] Multi-Attach error for volume \"pvc-4b098389-1f11-4a19-9fa9-d9e0f7b9c162\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-071b9c81a8b720c5c\") from node \"ip-172-20-47-217.ap-northeast-2.compute.internal\" Volume is already exclusively attached to node ip-172-20-48-118.ap-northeast-2.compute.internal and can't be attached to another\nI0908 04:29:11.891082       1 event.go:291] \"Event occurred\" 
object=\"statefulset-4006/ss-0\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-4b098389-1f11-4a19-9fa9-d9e0f7b9c162\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI0908 04:29:11.980539       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9591\nI0908 04:29:11.997251       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-4b098389-1f11-4a19-9fa9-d9e0f7b9c162\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-071b9c81a8b720c5c\") on node \"ip-172-20-48-118.ap-northeast-2.compute.internal\" \nI0908 04:29:12.000679       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-4b098389-1f11-4a19-9fa9-d9e0f7b9c162\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ap-northeast-2a/vol-071b9c81a8b720c5c\") on node \"ip-172-20-48-118.ap-northeast-2.compute.internal\" \nI0908 04:29:12.081548       1 event.go:291] \"Event occurred\" object=\"provisioning-7641/pvc-wdshh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-7641\\\" or manually created by system administrator\"\nE0908 04:29:12.171951       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nI0908 04:29:12.227442       1 namespace_controller.go:185] Namespace has been deleted projected-5226\nI0908 04:29:12.561322       1 namespace_controller.go:185] Namespace has been deleted volumemode-5481\nE0908 04:29:12.623285       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nI0908 04:29:13.000503       1 
deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-695/test-adopt-deployment\"\nE0908 04:29:13.099927       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-5479/pvc-5rs6x: storageclass.storage.k8s.io \"provisioning-5479\" not found\nI0908 04:29:13.100335       1 event.go:291] \"Event occurred\" object=\"provisioning-5479/pvc-5rs6x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5479\\\" not found\"\nI0908 04:29:13.210538       1 namespace_controller.go:185] Namespace has been deleted multi-az-2764\nI0908 04:29:13.283033       1 pv_controller.go:879] volume \"local-lzpkz\" entered phase \"Available\"\nE0908 04:29:13.738781       1 tokens_controller.go:262] error synchronizing serviceaccount containers-5242/default: secrets \"default-token-4r8kk\" is forbidden: unable to create new content in namespace containers-5242 because it is being terminated\nE0908 04:29:13.831359       1 namespace_controller.go:162] deletion of namespace services-5052 failed: unexpected items still remain in namespace: services-5052 for gvr: /v1, Resource=pods\nI0908 04:29:13.886969       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-7053/e2e-test-webhook-5dwlg\" objectUID=5d5d7b9b-447b-4cb0-aa11-f444e63e60ff kind=\"EndpointSlice\" virtual=false\nI0908 04:29:13.900245       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-7053/e2e-test-webhook-5dwlg\" objectUID=5d5d7b9b-447b-4cb0-aa11-f444e63e60ff kind=\"EndpointSlice\" propagationPolicy=Background\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-34-152.ap-northeast-2.compute.internal ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-34-152.ap-northeast-2.compute.internal ====\nI0908 04:12:07.951491       1 flags.go:59] FLAG: 
--add-dir-header=\"false\"\nI0908 04:12:07.954004       1 flags.go:59] FLAG: --address=\"0.0.0.0\"\nI0908 04:12:07.954081       1 flags.go:59] FLAG: --algorithm-provider=\"\"\nI0908 04:12:07.954102       1 flags.go:59] FLAG: --allow-metric-labels=\"[]\"\nI0908 04:12:07.954126       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0908 04:12:07.954161       1 flags.go:59] FLAG: --authentication-kubeconfig=\"/var/lib/kube-scheduler/kubeconfig\"\nI0908 04:12:07.954201       1 flags.go:59] FLAG: --authentication-skip-lookup=\"false\"\nI0908 04:12:07.954224       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"10s\"\nI0908 04:12:07.954244       1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure=\"true\"\nI0908 04:12:07.954276       1 flags.go:59] FLAG: --authorization-always-allow-paths=\"[/healthz,/readyz,/livez]\"\nI0908 04:12:07.954318       1 flags.go:59] FLAG: --authorization-kubeconfig=\"/var/lib/kube-scheduler/kubeconfig\"\nI0908 04:12:07.954339       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl=\"10s\"\nI0908 04:12:07.954358       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"10s\"\nI0908 04:12:07.954375       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0908 04:12:07.954422       1 flags.go:59] FLAG: --cert-dir=\"\"\nI0908 04:12:07.954443       1 flags.go:59] FLAG: --client-ca-file=\"\"\nI0908 04:12:07.954460       1 flags.go:59] FLAG: --config=\"/var/lib/kube-scheduler/config.yaml\"\nI0908 04:12:07.954479       1 flags.go:59] FLAG: --contention-profiling=\"true\"\nI0908 04:12:07.954521       1 flags.go:59] FLAG: --disabled-metrics=\"[]\"\nI0908 04:12:07.954542       1 flags.go:59] FLAG: --experimental-logging-sanitization=\"false\"\nI0908 04:12:07.954560       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0908 04:12:07.954597       1 flags.go:59] FLAG: --hard-pod-affinity-symmetric-weight=\"1\"\nI0908 04:12:07.954628       1 flags.go:59] FLAG: --help=\"false\"\nI0908 04:12:07.954648    
   1 flags.go:59] FLAG: --http2-max-streams-per-connection=\"0\"\nI0908 04:12:07.954668       1 flags.go:59] FLAG: --kube-api-burst=\"100\"\nI0908 04:12:07.954686       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0908 04:12:07.954731       1 flags.go:59] FLAG: --kube-api-qps=\"50\"\nI0908 04:12:07.954753       1 flags.go:59] FLAG: --kubeconfig=\"\"\nI0908 04:12:07.954770       1 flags.go:59] FLAG: --leader-elect=\"true\"\nI0908 04:12:07.954788       1 flags.go:59] FLAG: --leader-elect-lease-duration=\"15s\"\nI0908 04:12:07.954833       1 flags.go:59] FLAG: --leader-elect-renew-deadline=\"10s\"\nI0908 04:12:07.954853       1 flags.go:59] FLAG: --leader-elect-resource-lock=\"leases\"\nI0908 04:12:07.954871       1 flags.go:59] FLAG: --leader-elect-resource-name=\"kube-scheduler\"\nI0908 04:12:07.954911       1 flags.go:59] FLAG: --leader-elect-resource-namespace=\"kube-system\"\nI0908 04:12:07.955020       1 flags.go:59] FLAG: --leader-elect-retry-period=\"2s\"\nI0908 04:12:07.955050       1 flags.go:59] FLAG: --lock-object-name=\"kube-scheduler\"\nI0908 04:12:07.955069       1 flags.go:59] FLAG: --lock-object-namespace=\"kube-system\"\nI0908 04:12:07.955088       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0908 04:12:07.955126       1 flags.go:59] FLAG: --log-dir=\"\"\nI0908 04:12:07.955155       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-scheduler.log\"\nI0908 04:12:07.955233       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0908 04:12:07.955258       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0908 04:12:07.955277       1 flags.go:59] FLAG: --logging-format=\"text\"\nI0908 04:12:07.955316       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0908 04:12:07.955355       1 flags.go:59] FLAG: --master=\"\"\nI0908 04:12:07.955426       1 flags.go:59] FLAG: --one-output=\"false\"\nI0908 04:12:07.955456       1 flags.go:59] FLAG: --permit-address-sharing=\"false\"\nI0908 04:12:07.955477       1 
flags.go:59] FLAG: --permit-port-sharing=\"false\"\nI0908 04:12:07.955535       1 flags.go:59] FLAG: --policy-config-file=\"\"\nI0908 04:12:07.955564       1 flags.go:59] FLAG: --policy-configmap=\"\"\nI0908 04:12:07.955584       1 flags.go:59] FLAG: --policy-configmap-namespace=\"kube-system\"\nI0908 04:12:07.955643       1 flags.go:59] FLAG: --port=\"10251\"\nI0908 04:12:07.955673       1 flags.go:59] FLAG: --profiling=\"true\"\nI0908 04:12:07.955693       1 flags.go:59] FLAG: --requestheader-allowed-names=\"[]\"\nI0908 04:12:07.955752       1 flags.go:59] FLAG: --requestheader-client-ca-file=\"\"\nI0908 04:12:07.955781       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix=\"[x-remote-extra-]\"\nI0908 04:12:07.955859       1 flags.go:59] FLAG: --requestheader-group-headers=\"[x-remote-group]\"\nI0908 04:12:07.955899       1 flags.go:59] FLAG: --requestheader-username-headers=\"[x-remote-user]\"\nI0908 04:12:07.955921       1 flags.go:59] FLAG: --scheduler-name=\"default-scheduler\"\nI0908 04:12:07.955977       1 flags.go:59] FLAG: --secure-port=\"10259\"\nI0908 04:12:07.956009       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0908 04:12:07.956028       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0908 04:12:07.956087       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0908 04:12:07.956116       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0908 04:12:07.956136       1 flags.go:59] FLAG: --tls-cert-file=\"/srv/kubernetes/kube-scheduler/server.crt\"\nI0908 04:12:07.956194       1 flags.go:59] FLAG: --tls-cipher-suites=\"[]\"\nI0908 04:12:07.956233       1 flags.go:59] FLAG: --tls-min-version=\"\"\nI0908 04:12:07.956311       1 flags.go:59] FLAG: --tls-private-key-file=\"/srv/kubernetes/kube-scheduler/server.key\"\nI0908 04:12:07.956342       1 flags.go:59] FLAG: --tls-sni-cert-key=\"[]\"\nI0908 04:12:07.956364       1 flags.go:59] FLAG: --use-legacy-policy-config=\"false\"\nI0908 04:12:07.956382       1 flags.go:59] FLAG: 
--v=\"2\"\nI0908 04:12:07.958065       1 flags.go:59] FLAG: --version=\"false\"\nI0908 04:12:07.958102       1 flags.go:59] FLAG: --vmodule=\"\"\nI0908 04:12:07.958119       1 flags.go:59] FLAG: --write-config-to=\"\"\nI0908 04:12:07.974418       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for \"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\"\nW0908 04:12:18.427525       1 authentication.go:337] Error looking up in-cluster authentication configuration: Get \"https://127.0.0.1/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": net/http: TLS handshake timeout\nW0908 04:12:18.427557       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.\nW0908 04:12:18.427563       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false\nI0908 04:12:39.011015       1 factory.go:195] \"Creating scheduler from algorithm provider\" algorithmProvider=\"DefaultProvider\"\nI0908 04:12:39.025470       1 configfile.go:72] Using component config:\napiVersion: kubescheduler.config.k8s.io/v1beta1\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 100\n  contentType: application/vnd.kubernetes.protobuf\n  kubeconfig: /var/lib/kube-scheduler/kubeconfig\n  qps: 50\nenableContentionProfiling: true\nenableProfiling: true\nhealthzBindAddress: 0.0.0.0:10251\nkind: KubeSchedulerConfiguration\nleaderElection:\n  leaderElect: true\n  leaseDuration: 15s\n  renewDeadline: 10s\n  resourceLock: leases\n  resourceName: kube-scheduler\n  resourceNamespace: kube-system\n  retryPeriod: 2s\nmetricsBindAddress: 0.0.0.0:10251\nparallelism: 16\npercentageOfNodesToScore: 0\npodInitialBackoffSeconds: 1\npodMaxBackoffSeconds: 10\nprofiles:\n- pluginConfig:\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: DefaultPreemptionArgs\n      
minCandidateNodesAbsolute: 100\n      minCandidateNodesPercentage: 10\n    name: DefaultPreemption\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      hardPodAffinityWeight: 1\n      kind: InterPodAffinityArgs\n    name: InterPodAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeAffinityArgs\n    name: NodeAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesFitArgs\n    name: NodeResourcesFit\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesLeastAllocatedArgs\n      resources:\n      - name: cpu\n        weight: 1\n      - name: memory\n        weight: 1\n    name: NodeResourcesLeastAllocated\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      defaultingType: System\n      kind: PodTopologySpreadArgs\n    name: PodTopologySpread\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      bindTimeoutSeconds: 600\n      kind: VolumeBindingArgs\n    name: VolumeBinding\n  plugins:\n    bind:\n      enabled:\n      - name: DefaultBinder\n        weight: 0\n    filter:\n      enabled:\n      - name: NodeUnschedulable\n        weight: 0\n      - name: NodeName\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n      - name: NodeAffinity\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: NodeResourcesFit\n        weight: 0\n      - name: VolumeRestrictions\n        weight: 0\n      - name: EBSLimits\n        weight: 0\n      - name: GCEPDLimits\n        weight: 0\n      - name: NodeVolumeLimits\n        weight: 0\n      - name: AzureDiskLimits\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n      - name: VolumeZone\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n    permit: {}\n    postBind: {}\n    postFilter:\n      enabled:\n      - name: 
DefaultPreemption\n        weight: 0\n    preBind:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    preFilter:\n      enabled:\n      - name: NodeResourcesFit\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n      - name: NodeAffinity\n        weight: 0\n    preScore:\n      enabled:\n      - name: InterPodAffinity\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n      - name: NodeAffinity\n        weight: 0\n    queueSort:\n      enabled:\n      - name: PrioritySort\n        weight: 0\n    reserve:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    score:\n      enabled:\n      - name: NodeResourcesBalancedAllocation\n        weight: 1\n      - name: ImageLocality\n        weight: 1\n      - name: InterPodAffinity\n        weight: 1\n      - name: NodeResourcesLeastAllocated\n        weight: 1\n      - name: NodeAffinity\n        weight: 1\n      - name: NodePreferAvoidPods\n        weight: 10000\n      - name: PodTopologySpread\n        weight: 2\n      - name: TaintToleration\n        weight: 1\n  schedulerName: default-scheduler\n\nI0908 04:12:39.025499       1 server.go:138] Starting Kubernetes Scheduler version v1.21.4\nW0908 04:12:39.030979       1 authorization.go:47] Authorization is disabled\nW0908 04:12:39.031000       1 authentication.go:47] Authentication is disabled\nI0908 04:12:39.031014       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251\nI0908 04:12:39.049042       1 tlsconfig.go:200] loaded serving cert [\"serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\"]: \"kube-scheduler\" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer=\"kubernetes-ca\" (2021-09-06 04:11:17 
+0000 UTC to 2022-12-18 12:11:17 +0000 UTC (now=2021-09-08 04:12:39.049020355 +0000 UTC))\nI0908 04:12:39.049324       1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1631074328\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1631074328\" (2021-09-08 03:12:07 +0000 UTC to 2022-09-08 03:12:07 +0000 UTC (now=2021-09-08 04:12:39.049304199 +0000 UTC))\nI0908 04:12:39.049352       1 secure_serving.go:197] Serving securely on [::]:10259\nI0908 04:12:39.049480       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0908 04:12:39.049493       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\nI0908 04:12:39.049512       1 dynamic_serving_content.go:130] Starting serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key\nI0908 04:12:39.049546       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nE0908 04:12:41.122252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope\nE0908 04:12:41.122439       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope\nE0908 04:12:41.122574       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group 
\"storage.k8s.io\" at the cluster scope\nE0908 04:12:41.122763       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope\nE0908 04:12:41.122819       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope\nE0908 04:12:41.122868       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nE0908 04:12:41.122912       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\nE0908 04:12:41.122969       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope\nE0908 04:12:41.123014       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope\nE0908 04:12:41.123060       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: 
failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0908 04:12:41.123117       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0908 04:12:41.123172       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0908 04:12:41.123256       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0908 04:12:41.129316       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0908 04:12:41.997350       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0908 04:12:42.027530       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0908 04:12:42.053702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0908 04:12:42.085757       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0908 04:12:42.110006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0908 04:12:42.117093       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0908 04:12:42.133246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0908 04:12:42.464523       1 node_tree.go:65] Added node "ip-172-20-34-152.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0908 04:12:44.751343       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...
I0908 04:12:44.759161       1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler
I0908 04:12:45.349840       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0908 04:12:45.350336       1 tlsconfig.go:178] loaded client CA [0/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file"]: "kubernetes-ca" [] issuer="<self>" (2021-09-06 04:10:11 +0000 UTC to 2031-09-06 04:10:11 +0000 UTC (now=2021-09-08 04:12:45.35031758 +0000 UTC))
I0908 04:12:45.350576       1 tlsconfig.go:200] loaded serving cert ["serving-cert::/srv/kubernetes/kube-scheduler/server.crt::/srv/kubernetes/kube-scheduler/server.key"]: "kube-scheduler" [serving] validServingFor=[kube-scheduler.kube-system.svc.cluster.local] issuer="kubernetes-ca" (2021-09-06 04:11:17 +0000 UTC to 2022-12-18 12:11:17 +0000 UTC (now=2021-09-08 04:12:45.350563463 +0000 UTC))
I0908 04:12:45.350816       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1631074328" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1631074328" (2021-09-08 03:12:07 +0000 UTC to 2022-09-08 03:12:07 +0000 UTC (now=2021-09-08 04:12:45.3508027 +0000 UTC))
I0908 04:13:10.137199       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-9v9cw" node="ip-172-20-34-152.ap-northeast-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0908 04:13:10.405452       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5dc785954d-lkrf6" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0908 04:13:10.432559       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-84d4cfd89c-djqzn" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0908 04:13:10.459897       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-operator-9d769d8b4-skjth" node="ip-172-20-34-152.ap-northeast-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0908 04:13:10.460288       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/dns-controller-6dd7cb5b5d-lc8sp" node="ip-172-20-34-152.ap-northeast-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0908 04:13:11.778812       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5dc785954d-lkrf6" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0908 04:13:11.779094       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-84d4cfd89c-djqzn" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0908 04:13:31.489372       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5dc785954d-lkrf6" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0908 04:13:31.489551       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-84d4cfd89c-djqzn" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0908 04:13:31.514752       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/kops-controller-8sxh2" node="ip-172-20-34-152.ap-northeast-2.compute.internal" evaluatedNodes=1 feasibleNodes=1
I0908 04:13:35.793237       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5dc785954d-lkrf6" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0908 04:13:35.793549       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-84d4cfd89c-djqzn" err="0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate."
I0908 04:14:14.239998       1 node_tree.go:65] Added node "ip-172-20-61-194.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0908 04:14:14.240624       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5dc785954d-lkrf6" err="0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0908 04:14:14.281154       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-84d4cfd89c-djqzn" err="0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0908 04:14:14.311291       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-lvkgh" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=2 feasibleNodes=1
I0908 04:14:17.397980       1 node_tree.go:65] Added node "ip-172-20-48-118.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0908 04:14:17.429652       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-gd48d" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=3 feasibleNodes=1
I0908 04:14:18.754663       1 node_tree.go:65] Added node "ip-172-20-53-124.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0908 04:14:18.780143       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-r2gzk" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=4 feasibleNodes=1
I0908 04:14:24.828532       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5dc785954d-lkrf6" err="0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0908 04:14:24.840252       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-84d4cfd89c-djqzn" err="0/4 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0908 04:14:26.740530       1 node_tree.go:65] Added node "ip-172-20-47-217.ap-northeast-2.compute.internal" in group "ap-northeast-2:\x00:ap-northeast-2a" to NodeTree
I0908 04:14:26.772215       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/cilium-hdphv" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:14:34.834966       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-5dc785954d-lkrf6" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0908 04:14:35.835699       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="kube-system/coredns-autoscaler-84d4cfd89c-djqzn" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
I0908 04:14:44.846642       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-5dc785954d-lkrf6" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:14:45.848138       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-autoscaler-84d4cfd89c-djqzn" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:15:02.958682       1 scheduler.go:604] "Successfully bound pod to node" pod="kube-system/coredns-5dc785954d-7r2x7" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:54.172005       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-1175/pod-test" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:54.380274       1 scheduler.go:604] "Successfully bound pod to node" pod="job-7829/adopt-release-jwtjx" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:54.380279       1 scheduler.go:604] "Successfully bound pod to node" pod="job-7829/adopt-release-542dv" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:54.527346       1 scheduler.go:604] "Successfully bound pod to node" pod="sysctl-2640/sysctl-35829d11-d532-48a8-8b97-ea6225cac224" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:54.531830       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-7331/pod-configmaps-89af6b03-20f2-443f-835c-6c76c5666c87" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:54.662627       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2084/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-4cwdp" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:54.981333       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-3290/pod-projected-secrets-01f2d2d2-2ff9-4d90-bc0b-a0c2c3bba928" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:55.026452       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2714/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-ttwbn" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:55.354257       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6095/netserver-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:55.579897       1 scheduler.go:604] "Successfully bound pod to node" pod="job-7897/fail-once-local-q4vph" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:55.587740       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3284/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-qz5hw" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:55.587943       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6095/netserver-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:55.634323       1 scheduler.go:604] "Successfully bound pod to node" pod="job-7897/fail-once-local-fcrlj" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:55.661092       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-8283/labelsupdate5a5290b3-5b2c-402b-91e0-f4f3434e125f" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:55.661573       1 scheduler.go:604] "Successfully bound pod to node" pod="webhook-8197/sample-webhook-deployment-78988fc6cd-d2ng2" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:55.671748       1 scheduler.go:604] "Successfully bound pod to node" pod="services-1947/kube-proxy-mode-detector" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:55.691626       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6095/netserver-2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:55.854898       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6095/netserver-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:56.232326       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-5756/test-webserver-af420e93-32e5-4def-9606-79970f5eef36" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:56.276238       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3486/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-vlclg" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:56.800490       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-1975/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-v5crp" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:17:58.122326       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-3883/agnhost-primary-lr9p7" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:58.898157       1 scheduler.go:604] "Successfully bound pod to node" pod="job-8033/exceed-active-deadline-ld5zl" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:58.911020       1 scheduler.go:604] "Successfully bound pod to node" pod="job-8033/exceed-active-deadline-47g28" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:17:58.998930       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-5018/pfpod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:17:59.836025       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3492/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-mqrcq" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:00.184965       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-1390/httpd" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:00.848265       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5732/pod-subpath-test-dynamicpv-47kd" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:00.855865       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7273/pod-subpath-test-dynamicpv-7mxm" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:00.879889       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-1557/aws-injector" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:00.979597       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-5018/pfpod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:02.583791       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3851/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-7vblx" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:02.981932       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-5018/pfpod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:03.016974       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6895/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-78tc9" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:05.806381       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-5018/burstable-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:06.486983       1 scheduler.go:604] "Successfully bound pod to node" pod="job-7897/fail-once-local-npkvh" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:06.984987       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-5018/burstable-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:07.502026       1 scheduler.go:604] "Successfully bound pod to node" pod="job-7897/fail-once-local-lml7g" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:07.542939       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-6895/pod-ff346c3b-e96d-4979-98a7-10e495adbd0a" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:08.527863       1 scheduler.go:604] "Successfully bound pod to node" pod="job-7829/adopt-release-qvmt9" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:08.986672       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-5018/burstable-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:10.252885       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2686/pod-subpath-test-dynamicpv-z6hx" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:10.440434       1 scheduler.go:604] "Successfully bound pod to node" pod="services-1947/affinity-nodeport-timeout-g8khj" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:10.449858       1 scheduler.go:604] "Successfully bound pod to node" pod="services-1947/affinity-nodeport-timeout-mkllv" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:10.454554       1 scheduler.go:604] "Successfully bound pod to node" pod="services-1947/affinity-nodeport-timeout-d7bw7" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:10.472948       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-431/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-q5xrl" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:10.987803       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5542/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-c5w8h" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:11.407410       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3851/pod-subpath-test-preprovisionedpv-gjp8" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:11.795704       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2084/pod-subpath-test-preprovisionedpv-cwkx" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:12.016989       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2714/pod-subpath-test-preprovisionedpv-bqm9" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:12.203928       1 scheduler.go:604] "Successfully bound pod to node" pod="secrets-4238/pod-secrets-bf4c8a7d-4de2-414e-9e13-985d72c979fb" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:12.216161       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3486/pod-subpath-test-preprovisionedpv-jd5s" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:12.237939       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3284/pod-subpath-test-preprovisionedpv-gsbc" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:14.102756       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-5131-3980/csi-hostpath-attacher-0" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:14.582778       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-5131-3980/csi-hostpathplugin-0" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:14.749494       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-5131-3980/csi-hostpath-provisioner-0" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:14.791502       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3071/netserver-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:14.895998       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-5131-3980/csi-hostpath-resizer-0" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:14.943222       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3071/netserver-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:15.060855       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-5131-3980/csi-hostpath-snapshotter-0" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:15.111256       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3071/netserver-2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:15.267504       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3071/netserver-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:15.557139       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5542/pod-6c155831-1dad-4599-bde0-261a08fea3fc" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:15.633494       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-4182/httpd" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:16.136379       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-1796/ss-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:17.071852       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-431/pod-3776d485-708d-4a22-8282-3d7bd6baa30d" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:17.425858       1 scheduler.go:604] "Successfully bound pod to node" pod="services-1947/execpod-affinitynmkhc" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:17.660485       1 scheduler.go:604] "Successfully bound pod to node" pod="var-expansion-2193/var-expansion-629c8355-05e8-4525-a46c-ebdd794577f6" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:18.708781       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-7460/nfs-server" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:19.530717       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-7718/ss-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:20.125089       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-3515/security-context-1d162dd3-a0b0-432b-af95-493c57b1e036" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:22.636588       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5542/pod-599f5fab-0473-4ed1-9bb1-062069a1e1cc" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:24.170268       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-4481/image-pull-test9cfcf3a4-1d33-432f-b839-b0bf7fb64ccb" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:25.158318       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-1557/aws-client" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:25.648399       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5491/pod-subpath-test-inlinevolume-4cvj" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:25.868260       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3492/pod-subpath-test-preprovisionedpv-4g92" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:27.608771       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6095/test-container-pod" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:27.768498       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-6095/host-test-container-pod" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:28.332175       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7273/pod-subpath-test-dynamicpv-7mxm" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:29.033513       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-1437/test-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:29.364097       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7128/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-v22rx" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:31.001805       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-1437/test-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:31.332139       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8063/httpd" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:31.490999       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8733/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-nk6xs" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:32.154200       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9275/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-l46w5" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:33.002629       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-1437/test-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:33.827430       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-6630-3933/csi-mockplugin-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:34.151448       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-6630-3933/csi-mockplugin-attacher-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:34.494139       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-8298/explicit-root-uid" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:34.532130       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-1801/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-kbmjx" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:35.578373       1 scheduler.go:604] "Successfully bound pod to node" pod="init-container-6636/pod-init-036cd6ab-c3e0-4256-b7bf-3f6ca006f823" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:35.835613       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-1437/terminating-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:37.004101       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-1437/terminating-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:37.307377       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-7460/pvc-tester-xfnj4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:38.878945       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-3071/test-container-pod" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:39.005306       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="resourcequota-1437/terminating-pod" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector."
I0908 04:18:39.143073       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-1160/pod-79b83de0-b497-4daf-968e-930d7015a14a" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:39.913167       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-5131/pod-2958214c-2dba-45e5-a7be-89ea10e0237e" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:40.261535       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-1801/pod-ceab105b-d3f5-4521-abc0-c01e77226862" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:40.304408       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8733/pod-subpath-test-preprovisionedpv-x9jv" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:40.417476       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7128/pod-subpath-test-preprovisionedpv-nfgp" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:41.972857       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-3514/alpine-nnp-nil-bdd478f9-fa20-4c87-9252-607083d7634f" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:42.047365       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9275/pod-subpath-test-preprovisionedpv-9knv" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:42.646089       1 scheduler.go:604] "Successfully bound pod to node" pod="proxy-7828/agnhost" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:43.985025       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1864/pod-subpath-test-inlinevolume-jwpb" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:43.996511       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-3043/downwardapi-volume-b3397114-bcf9-4be2-8208-e8984d730a94" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:44.473644       1 scheduler.go:604] "Successfully bound pod to node" pod="default/recycler-for-nfs-ss86z" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:45.716017       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-9513/downwardapi-volume-289ecfbe-139b-481b-b5b0-2592b2163377" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:46.171474       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8063/success" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:46.968215       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9497/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-mh2hg" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:47.173857       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9026/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-4lxqr" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:47.244951       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-1801/pod-e37f08ac-a756-4a4b-9a6d-6396a755ed90" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:48.275443       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-1796/ss-1" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:50.044340       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9039/pod-subpath-test-inlinevolume-n24j" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:50.261910       1 scheduler.go:604] "Successfully bound pod to node" pod="services-7915/affinity-nodeport-8x8g4" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:50.264223       1 scheduler.go:604] "Successfully bound pod to node" pod="services-7915/affinity-nodeport-bk2t4" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:50.283789       1 scheduler.go:604] "Successfully bound pod to node" pod="services-7915/affinity-nodeport-nwkxt" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:54.402117       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8733/pod-subpath-test-preprovisionedpv-x9jv" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:18:54.789772       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8063/failure-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:54.852404       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-3642/security-context-c14b06d3-8183-46a6-a375-29244f348f01" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:18:55.149966       1 scheduler.go:604] "Successfully bound pod to node"
pod=\"nettest-85/netserver-0\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:18:55.314563       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-85/netserver-1\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:18:55.478074       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-85/netserver-2\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:18:55.640625       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-85/netserver-3\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:18:55.981048       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9026/pod-subpath-test-preprovisionedpv-rk7w\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:18:56.044677       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9497/pod-subpath-test-preprovisionedpv-gqjn\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:18:56.308735       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-661/pod-subpath-test-inlinevolume-v97n\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:18:56.730807       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-6447/security-context-f736fadd-e91c-425c-ae88-cda82e063a0d\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:18:57.558417       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"crd-webhook-8032/sample-crd-conversion-webhook-deployment-697cdbd8f4-nhb7p\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:18:58.267351      
 1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-7460/pvc-tester-nrqqj\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:18:59.898370       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-8063/failure-2\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:01.115946       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"default/recycler-for-nfs-ss86z\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:02.744936       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5554/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-thzw5\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:03.413150       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-3463/ss-0\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:05.289570       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-695/test-orphan-deployment-847dcfb7fb-fpg5g\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:05.678494       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-5131/pod-e626076e-136e-49f8-b7a7-8ba8c8d51d30\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:06.122178       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-7915/execpod-affinity5grgb\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:07.374145       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6839/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-gwm5l\" 
node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:07.492615       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"csi-mock-volumes-6630/pvc-volume-tester-2g78r\" err=\"0/5 nodes are available: 1 node(s) did not have enough free storage, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0908 04:19:07.621890       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8841/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-9l2hn\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:08.197269       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-879/e2e-configmap-dns-server-6a8f565a-705e-4b8b-8310-8644c42ffd9e\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:08.270467       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-6252/sample-webhook-deployment-78988fc6cd-8wmc7\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:08.325931       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5554/pod-47872bef-c31f-4387-a814-a75f790f0143\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:11.795563       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-4180/pod-secrets-a0813f99-42e7-4a19-8f0e-ce35c8550d77\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:11.925890       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-2496/pod-projected-configmaps-a2e32a35-efe3-4cc3-83d9-844555565c93\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:12.231376     
  1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-2745/busybox-user-0-8fb94c11-59b3-4ec6-adc3-ef3785ae663c\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:12.853522       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"dns-879/e2e-dns-utils\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:13.410098       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5554/pod-debf61ec-a26d-4e48-9052-a391b596970b\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:13.450241       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8545/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-snq5b\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:14.363295       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-3463/ss-1\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:14.577739       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4458-9223/csi-mockplugin-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:15.937667       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-6839/pod-820ea07b-2917-409c-a1ea-0aae119ecee9\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0908 04:19:17.031619       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-6839/pod-820ea07b-2917-409c-a1ea-0aae119ecee9\" err=\"0/5 nodes are 
available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0908 04:19:17.277170       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-9721/pod-ready\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:18.368572       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3274/inline-volume-jcfgz\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-jcfgz-my-volume\\\" not found.\"\nI0908 04:19:19.034502       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-6839/pod-820ea07b-2917-409c-a1ea-0aae119ecee9\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nI0908 04:19:19.045725       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"aggregator-8379/sample-apiserver-deployment-64f6b9dc99-8qrs7\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:19.334495       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5125/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-zlt84\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:19.489357       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-85/test-container-pod\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:20.014624       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-test-6511/busybox-readonly-fs8511415e-8b81-4975-bf32-3d2530f2a80b\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:20.061192       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8545/pod-377644c4-b914-418d-9cff-9b8dafd1a60d\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:21.380440       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-6440/test-webserver-4ce3bf5c-0187-43f4-9405-cae90d968f0d\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:21.397013       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-1796/ss-2\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:21.992594       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7178/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-p8dsb\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:23.038480       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"persistent-local-volumes-test-6839/pod-820ea07b-2917-409c-a1ea-0aae119ecee9\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\"\nE0908 04:19:23.043493       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pod-820ea07b-2917-409c-a1ea-0aae119ecee9.16a2bdd945c777d7\", GenerateName:\"\", Namespace:\"persistent-local-volumes-test-6839\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"persistent-local-volumes-test-6839\", Name:\"pod-820ea07b-2917-409c-a1ea-0aae119ecee9\", UID:\"f2839940-a3f6-437b-a87e-8342e1455551\", APIVersion:\"v1\", ResourceVersion:\"4906\", FieldPath:\"\"}, Reason:\"FailedScheduling\", Message:\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector.\", Source:v1.EventSource{Component:\"default-scheduler\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766671555, loc:(*time.Location)(0x30fd1c0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0462c52c24efcfc, ext:435520209402, loc:(*time.Location)(0x30fd1c0)}}, Count:4, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pod-820ea07b-2917-409c-a1ea-0aae119ecee9.16a2bdd945c777d7\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-6839 because it is being terminated' (will not retry!)\nI0908 04:19:23.455202       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-263/sample-webhook-deployment-78988fc6cd-wc969\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:23.797949       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-1065/pod-configmaps-7e10d941-6860-4a7e-89a5-d4db9f2da763\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:24.508677       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3274-80/csi-hostpath-attacher-0\" 
node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:25.003215       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3274-80/csi-hostpathplugin-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:25.163906       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3274-80/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:25.335804       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3274-80/csi-hostpath-resizer-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:25.481725       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3274-80/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:25.955851       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3274/inline-volume-tester-thh9b\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-thh9b-my-volume-0\\\" not found.\"\nI0908 04:19:26.180549       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5125/pod-379c6be2-88a2-4e53-9d11-ab770f83ac34\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:26.868799       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-84/test-pod\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:27.014908       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8545/pod-f8105738-d730-4bfb-a602-845dd4a97239\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:27.109600       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8841/pod-subpath-test-preprovisionedpv-8vkc\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:27.583186       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-3513/pod-adoption-release\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:32.934885       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replicaset-3513/pod-adoption-release-6zx4q\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:33.354288       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-5125/pod-1afbeddc-c7d7-4f46-8092-006c18e91f7b\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:33.734483       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9263/test-pod\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0908 04:19:34.269427       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"init-container-3139/pod-init-3230bc7f-a132-4593-be14-541a3ad04505\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:34.586341       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-3017/pod-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:34.761937       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-3017/pod-1\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:35.045357       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9263/test-pod\" 
err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0908 04:19:36.294962       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4025/httpd-deployment-948b4c64c-f94vc\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:36.343753       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4025/httpd-deployment-948b4c64c-krthp\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:36.488582       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5495/pod-subpath-test-dynamicpv-cn5h\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:36.524254       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2702/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-hwddk\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:36.558355       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1752/pod-f4124f6b-3818-4f13-b550-504bfbc0f912\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:36.693611       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-5961/test-pod-66864092-b0b9-403c-8169-ae22bfc8a95b\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:37.022910       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-3722/logs-generator\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:37.046734       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"resourcequota-9263/test-pod\" err=\"0/5 nodes are 
available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.\"\nI0908 04:19:37.925044       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-4458/pvc-volume-tester-zqs82\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:38.820637       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4025/httpd-deployment-948b4c64c-xs6bv\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:40.633152       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-4025/httpd-deployment-8584777d8-rqz29\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:40.768516       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-7178/local-injector\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:41.176275       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-4063/pod-handle-http-request\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:41.296107       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2702/pod-7113493b-7586-44f9-b95e-c7f91f5bedf4\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:41.740939       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-1842/pod-8b84f8be-3945-47cc-a5cd-7cbb1f25de63\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:42.323217       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-test-4661/bin-false63d87207-f92f-4276-abab-f48d55aca42d\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" 
evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:43.155270       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-9711/pod1\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:43.404037       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2880/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-t6tbr\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:43.535568       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-206/pod-configmaps-54a51afa-f0c7-4869-a431-96f4cf88ac92\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:47.803100       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubelet-test-5767/busybox-host-aliases7423521d-7365-4aee-b122-a31873b5288c\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:47.838302       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"replication-controller-2447/my-hostname-basic-9646b4ec-da1e-42a9-b098-eb52d02990f5-jxm9k\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:47.877003       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-lifecycle-hook-4063/pod-with-poststart-http-hook\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:47.910535       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6490/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-frgkv\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:50.321129       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-hr8xc\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 
feasibleNodes=4\nI0908 04:19:50.328642       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-9g5hl\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.330268       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-vv7zt\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.343902       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-4fl44\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.345404       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-m4lvh\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.351476       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-lqk57\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.351573       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-wpw7j\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.368153       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-fkvhm\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.370091       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-rbpgh\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.370200       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"gc-8339/simpletest.rc-csmmx\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.393091       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"replicaset-7938/my-hostname-basic-6c1cf279-094b-4e5f-9980-8814e01aef37-dhts8\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:50.416385       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-2702/pod-6322a256-41c8-46e3-b589-7a2f35d490a3\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:50.681298       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-7393/pod-37d13a0c-7d82-4331-97ac-f1623af78e48\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:51.046576       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3274/inline-volume-tester-thh9b\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:52.293087       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7445/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-jbg59\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:52.442795       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-9711/pod2\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:19:55.462377       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-7393/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-qf9bw\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:55.544411       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2880/pod-subpath-test-preprovisionedpv-lsfk\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:19:56.760647       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod="provisioning-6490/pod-subpath-test-preprovisionedpv-m67x" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:19:57.475304       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-8987/busybox-readonly-true-4b9ff64e-5b87-4335-b52d-1e0ee52f6347" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:19:58.434749       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-4395/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-cmlf8" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:19:59.322028       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-8109/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-286bd" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:19:59.328857       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-7178/local-client" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:01.019747       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-7622/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-nqngz" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:01.386742       1 scheduler.go:604] "Successfully bound pod to node" pod="port-forwarding-7455/pfpod" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:02.120746       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-4347/pod-projected-configmaps-08fc9b4a-605f-4a98-b107-d98080931adf" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:02.244459       1 scheduler.go:604] "Successfully bound pod to node" pod="endpointslice-9615/pod1" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:02.412206       1 scheduler.go:604] "Successfully bound pod to node" pod="endpointslice-9615/pod2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:03.363155       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1752/pvc-volume-tester-writer-cdzh4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:04.592973       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="ephemeral-3274/inline-volume-tester2-7mclg" err="0/5 nodes are available: 5 persistentvolumeclaim \"inline-volume-tester2-7mclg-my-volume-0\" not found."
I0908 04:20:05.444085       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-145/netserver-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:05.559256       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-4575/httpd" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:05.608277       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-145/netserver-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:05.774233       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-145/netserver-2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:05.936634       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-145/netserver-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:07.080593       1 scheduler.go:604] "Successfully bound pod to node" pod="ephemeral-3274/inline-volume-tester2-7mclg" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:08.024185       1 scheduler.go:604] "Successfully bound pod to node" pod="subpath-4930/pod-subpath-test-configmap-rz2r" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:08.167100       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8063/failure-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:09.830311       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-8109/pod-e754344b-6564-4a93-a336-2da053fe86eb" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:09.985744       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3796-8765/csi-mockplugin-0" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:10.302468       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3796-8765/csi-mockplugin-resizer-0" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:10.754551       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6254/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-n725q" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:10.867734       1 scheduler.go:604] "Successfully bound pod to node" pod="tables-124/pod-1" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:11.517548       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-7622/pod-3544653c-0877-420f-bbb8-821180747c99" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector."
I0908 04:20:11.899028       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7445/pod-subpath-test-preprovisionedpv-p5tq" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:12.403566       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-4395/pod-d836fd41-338b-4e7c-a7e1-1f094dad276f" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:12.672047       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-7124-561/csi-mockplugin-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:13.075640       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-7622/pod-3544653c-0877-420f-bbb8-821180747c99" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector."
I0908 04:20:13.850122       1 scheduler.go:604] "Successfully bound pod to node" pod="container-runtime-5525/image-pull-test2a79d1fd-6246-4559-beee-3be96ca8ca4d" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:14.879550       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-3107/netserver-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:14.930932       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-3919/pod-configmaps-cb15fa1a-6302-4067-9271-f7548d814bd7" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:15.037350       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-3107/netserver-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:15.197551       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-3107/netserver-2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:15.212471       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-4395/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-lq2jp" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:15.351744       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-3107/netserver-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:15.471250       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="persistent-local-volumes-test-7622/pod-3544653c-0877-420f-bbb8-821180747c99" err="0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had volume node affinity conflict, 3 node(s) didn't match Pod's node affinity/selector."
I0908 04:20:17.113014       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-4575/run-log-test" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:19.419695       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-7124/pvc-volume-tester-b2tqd" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:22.814893       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-8370/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-v667p" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:23.310526       1 scheduler.go:604] "Successfully bound pod to node" pod="replicaset-1338/condition-test-httjm" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:23.331261       1 scheduler.go:604] "Successfully bound pod to node" pod="replicaset-1338/condition-test-hhcds" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:23.486119       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-7206/pod-projected-configmaps-d894611b-a872-4eb2-8d62-5771b82a8d15" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:25.349325       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8791/nodeport-test-jkvk4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:25.360584       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8791/nodeport-test-26vqh" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:25.772124       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5605/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-2bqqx" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:27.265010       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6254/local-injector" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:27.463500       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-8370/pod-5a8a709b-a545-4492-b344-21cac5267c76" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:27.558309       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-145/test-container-pod" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:27.854745       1 scheduler.go:604] "Successfully bound pod to node" pod="job-3396/foo-lkcvl" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:27.862070       1 scheduler.go:604] "Successfully bound pod to node" pod="job-3396/foo-f95fq" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:28.054351       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-1245/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-rmpzb" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:28.444888       1 scheduler.go:604] "Successfully bound pod to node" pod="replication-controller-9064/pod-adoption" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:28.456421       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3796/pvc-volume-tester-w8gln" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:28.699118       1 scheduler.go:604] "Successfully bound pod to node" pod="services-8791/execpodtbv82" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:30.209394       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6253/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-r8fq9" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:30.382058       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-5037/pod-projected-secrets-725a0d08-2b5b-4a24-be28-bbef6a2400c7" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:30.402836       1 scheduler.go:604] "Successfully bound pod to node" pod="secrets-3774/pod-secrets-31526aec-e6bd-4d95-a53b-96be7aa72142" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:32.359032       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-8370/pod-c3fc06ea-c15f-4f6c-b704-e20d8f51906c" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:34.578064       1 scheduler.go:604] "Successfully bound pod to node" pod="port-forwarding-4008/pfpod" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:34.622355       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-1796/ss-0" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:34.859668       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-1554/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-2xg87" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:37.039168       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6997/test-deployment-7b4c744884-72p8j" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:37.052032       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6997/test-deployment-7b4c744884-5d98p" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:37.577583       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-1245/pod-3b3c47a6-306b-45a0-90d1-254d7434520c" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:38.646828       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2146/externalsvc-5p2pp" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:38.657847       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2146/externalsvc-275z7" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:38.982957       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-3107/test-container-pod" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:39.118928       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-3107/host-test-container-pod" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:39.278414       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6997/test-deployment-748588b7cd-wq8zt" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:41.197279       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6253/pod-subpath-test-preprovisionedpv-7fsz" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:41.408805       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-1752/pvc-volume-tester-reader-dfhk6" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:41.511737       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-4076/pod-configmaps-f24def24-3aa3-4c13-b40a-499a4f5df168" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:41.554719       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-7507/security-context-87183106-50c2-4e75-84d0-5d3ecb8e2fa4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:41.669863       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-1554/exec-volume-test-preprovisionedpv-ksxt" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:42.021618       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6997/test-deployment-748588b7cd-mhbx4" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:42.042494       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6997/test-deployment-85d87c6f4b-kt8zt" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:42.380244       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5605/pod-subpath-test-preprovisionedpv-9z2t" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:44.445059       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-6997/test-deployment-85d87c6f4b-ltrw4" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:45.508150       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2146/execpodh6d4z" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:46.057232       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3796/pvc-volume-tester-npkzk" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:47.612321       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-6254/local-client" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:50.089631       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-1796/ss-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:51.078451       1 scheduler.go:604] "Successfully bound pod to node" pod="fsgroupchangepolicy-7906/pod-172c7c8c-f0ea-44bf-9163-895d5508466c" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:51.777645       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-6708-7304/csi-hostpath-attacher-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:52.212188       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-6708-7304/csi-hostpathplugin-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:52.362476       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-6708-7304/csi-hostpath-provisioner-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:52.511871       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-6708-7304/csi-hostpath-resizer-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:53.014524       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-expand-6708-7304/csi-hostpath-snapshotter-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:53.251878       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-185/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-fxfdp" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:53.746376       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4906/pod-subpath-test-inlinevolume-7k4m" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:55.506369       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-1801/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-mlntx" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:56.368779       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-8874/netserver-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:56.566579       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-8874/netserver-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:56.799741       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-8874/netserver-2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:57.052205       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-8874/netserver-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:20:57.307384       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-3703/pod-handle-http-request" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:20:58.309927       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3004/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-54pxp" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:00.160993       1 scheduler.go:604] "Successfully bound pod to node" pod="cronjob-4873/concurrent-27184581-6jshn" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:00.254029       1 scheduler.go:604] "Successfully bound pod to node" pod="cronjob-3360/failed-jobs-history-limit-27184581-qphv5" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:01.959790       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-3703/pod-with-poststart-exec-hook" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:04.659236       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2239-3152/csi-mockplugin-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:04.875243       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-1832/pod-handle-http-request" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:04.974698       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2239-3152/csi-mockplugin-attacher-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:05.167309       1 scheduler.go:604] "Successfully bound pod to node" pod="pvc-protection-9815/pvc-tester-6lnrx" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:07.799616       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-1475/pod-qos-class-8c268a20-ceec-4898-8f2c-dc1cafa94af1" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:08.143167       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-1017/configmap-client" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:09.326390       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-7243/nfs-server" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:09.440351       1 scheduler.go:604] "Successfully bound pod to node" pod="statefulset-1796/ss-2" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:09.533757       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-1832/pod-with-prestop-exec-hook" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:10.463323       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-7923/pod-projected-configmaps-0a6e7457-8d73-43ce-b0b7-bef0f1a651e4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:10.689544       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-1801/local-injector" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:10.754662       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2239/pvc-volume-tester-rn79c" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:11.097916       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-185/pod-43fa4432-277d-4f75-8499-bd403d01da56" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:11.394200       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3004/pod-84295f91-a83e-46fa-b804-2accb50f03e7" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:12.622089       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-455/test-rolling-update-with-lb-864fb64577-rqm98" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:12.622659       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4419/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-f4hsh" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:12.645908       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-455/test-rolling-update-with-lb-864fb64577-vz49h" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0908 04:21:12.646493       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-455/test-rolling-update-with-lb-864fb64577-dsrz6" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=3
I0908 04:21:14.727066       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-6034/downwardapi-volume-961559eb-8d32-436c-84cb-fd34953aae3f" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:15.911560       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-185/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-s2csb" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:16.194891       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-3004/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-jjwtb" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:16.742394       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-9011/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-26gvx" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:17.265502       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-7916/netserver-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:17.429093       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-7916/netserver-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:17.594677       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-7916/netserver-2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:17.755769       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-7916/netserver-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:18.392329       1 factory.go:339] "Unable to schedule pod; no fit; waiting" pod="pvc-protection-9815/pvc-tester-k6h5h" err="0/5 nodes are available: 5 persistentvolumeclaim \"pvc-protections5frx\" is being deleted."
I0908 04:21:18.651810       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-8874/test-container-pod" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:19.222768       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2239/inline-volume-78z4w" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:19.401867       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4419/pod-5b9ffb05-f38c-432a-8794-e853d4cce430" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:21.218922       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-9011/pod-19f0088e-7665-4e0b-9769-bad008e0a960" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:21.386583       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-227/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-c8lxs" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:23.026749       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-9756/nfs-server" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:27.151997       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-2864/pod-projected-configmaps-29bd5ef5-c4ab-4bb4-b39d-e7f456b96d63" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:27.519445       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-1801/local-client" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:28.962761       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-9756/pvc-tester-vk9sb" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:30.463575       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-2687/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-fjjzg" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:30.884704       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5573/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-fz5w6" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:31.127045       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-8113/dns-test-c2e18a2f-006a-4777-94e4-164b88728b09" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:31.942429       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-9756/pvc-tester-rcrsc" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:32.497515       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4419/pod-c1913e9a-f490-45cb-8183-815ee8db1b29" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:33.130081       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-4415/security-context-03df724c-55cd-47f9-aed5-ef398d2a5923" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:33.134195       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-1955/security-context-5891998d-cd1e-4591-b403-666fe179971f" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:34.993170       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-9756/pvc-tester-l9scv" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:36.445839       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-2687/pod-bf70e716-3752-4fe9-afdf-ed8d37014276" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:40.426002       1 scheduler.go:604] "Successfully bound pod to node" pod="fsgroupchangepolicy-7906/pod-a42d6fbf-83b1-45d5-bfc2-52c61f03a09b" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:40.866231       1 scheduler.go:604] "Successfully bound pod to node" pod="pv-7243/pvc-tester-xl2qp" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:40.926399       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-227/local-injector" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:41.937338       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-5573/pod-subpath-test-preprovisionedpv-kzbz" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:42.388813       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-2918-7023/csi-hostpath-attacher-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:42.859129       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-2918-7023/csi-hostpathplugin-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:43.013052       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-2918-7023/csi-hostpath-provisioner-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:43.197346       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-2918-7023/csi-hostpath-resizer-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:43.319406       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-2918-7023/csi-hostpath-snapshotter-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:43.558644       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-7916/test-container-pod" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:43.720253       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-7916/host-test-container-pod" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:44.270856       1 scheduler.go:604] "Successfully bound pod to node" pod="dns-8719/test-dns-nameservers" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:46.585608       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4066/service-proxy-disabled-w7lbh" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:46.591434       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4066/service-proxy-disabled-mtr8v" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:46.597904       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4066/service-proxy-disabled-phzx6" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:48.252435       1 scheduler.go:604] "Successfully bound pod to node" pod="fsgroupchangepolicy-7974/pod-84d5b149-3069-48db-994e-220411493d6f" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:49.671133       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-2687/pod-b98dbcf8-07d8-4257-94cc-e1c7679d78a8" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:50.183447       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-8063/failure-4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:21:50.426386       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4166/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-pzcd2" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:50.865064       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-2918/pod-37da3204-e8a7-453a-997d-c9db1827c28e" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:21:53.279676       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4066/service-proxy-toggled-t79xb"
node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:21:53.279903       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4066/service-proxy-toggled-c7sp5\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:21:53.285255       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4066/service-proxy-toggled-vs8f7\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:21:54.375378       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9116/inline-volume-ww42w\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-ww42w-my-volume\\\" not found.\"\nI0908 04:21:54.420090       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-8455/test-recreate-deployment-6cb8b65c46-w9ssp\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:21:56.217248       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9591/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-qgfhq\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:21:57.420173       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-8455/test-recreate-deployment-85d47dcb4-rhqcd\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:21:58.787610       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6625/pod-subpath-test-inlinevolume-mvp8\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:21:59.788380       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4066/verify-service-up-host-exec-pod\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 
04:21:59.835432       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6402-222/csi-mockplugin-0\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:00.150336       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-4873/concurrent-27184582-mncfd\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:00.174835       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"cronjob-3360/failed-jobs-history-limit-27184582-jkjj4\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:00.176960       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6402-222/csi-mockplugin-attacher-0\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:00.375438       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-9116-8237/csi-hostpath-attacher-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:00.844931       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-9116-8237/csi-hostpathplugin-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:00.863284       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-965/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-x99h4\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:01.003507       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-9116-8237/csi-hostpath-provisioner-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:01.054374       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"security-context-test-2615/busybox-privileged-false-0ddd6e04-403a-4d81-9d0e-cb43bd19f9d7\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:01.172831       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-9116-8237/csi-hostpath-resizer-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:01.329638       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-9116-8237/csi-hostpath-snapshotter-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:01.650096       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volumemode-2918/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-2p86p\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:01.801120       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9116/inline-volume-tester-trprr\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-trprr-my-volume-0\\\" not found.\"\nI0908 04:22:03.159667       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9116/inline-volume-tester-trprr\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0908 04:22:03.294359       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-227/local-client\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:03.385500       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6112/netserver-0\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:03.553796       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9248/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-mc4bp\" 
node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:03.562480       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6112/netserver-1\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:03.711916       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6112/netserver-2\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:03.781531       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-3396/liveness-6dfae17e-0e80-4c9a-83a5-e0f84c983092\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:03.873482       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6112/netserver-3\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:04.275496       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4066/verify-service-up-exec-pod-8gxrp\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:05.160039       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9116/inline-volume-tester-trprr\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0908 04:22:06.823363       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-6402/pvc-volume-tester-2br4l\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:08.110413       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9396/hostpathsymlink-injector\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:09.185208       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"ephemeral-9116/inline-volume-tester-trprr\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:09.430451       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-1396/pod-secrets-4584b41d-a3a4-4f35-ab4a-330574c484b2\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:10.864875       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4166/pod-subpath-test-preprovisionedpv-6f9d\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:11.143603       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"fsgroupchangepolicy-7974/pod-5c8b479c-e34c-4929-b17f-363bdf8da583\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:11.207846       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9248/pod-eacbb0c6-11a7-4225-a004-8aabbb8d3bc9\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:11.850822       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9591/pod-subpath-test-preprovisionedpv-kkqw\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:12.283944       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"subpath-3787/pod-subpath-test-configmap-54p9\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:12.647274       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-76/pod-0\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:12.697995       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-8539/test-cleanup-controller-q8rds\" 
node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:12.808459       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-76/pod-1\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:12.963753       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-76/pod-2\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:15.107380       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4066/verify-service-down-host-exec-pod\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:15.363112       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-965/pod-862e1a61-bb1e-49ea-9b3b-bde844d952ed\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:17.675518       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-8539/test-cleanup-deployment-5b4d99b59b-qnmwm\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:18.127704       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3278/inline-volume-xl9v4\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-xl9v4-my-volume\\\" not found.\"\nI0908 04:22:18.321201       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-455/test-rolling-update-with-lb-5ff6986c95-wxcbz\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI0908 04:22:19.439504       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-9591/pod-subpath-test-preprovisionedpv-kkqw\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:20.466509       1 scheduler.go:604] 
\"Successfully bound pod to node\" pod=\"volume-9396/hostpathsymlink-client\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:21.075565       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-455/test-rolling-update-with-lb-5ff6986c95-thzfc\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=2\nI0908 04:22:22.340636       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-2611/pod-service-account-fefd14cb-e57c-465f-86f0-7d75d97ade81\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:22.638341       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-4314/inline-volume-zp7mz\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-zp7mz-my-volume\\\" not found.\"\nI0908 04:22:23.194402       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-455/test-rolling-update-with-lb-5ff6986c95-4bh7v\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:23.751972       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4066/verify-service-down-host-exec-pod\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:24.537183       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3278-6286/csi-hostpath-attacher-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:24.544141       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-9116/inline-volume-tester2-cdthv\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester2-cdthv-my-volume-0\\\" not found.\"\nI0908 04:22:24.989448       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"ephemeral-3278-6286/csi-hostpathplugin-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:25.118518       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3278-6286/csi-hostpath-provisioner-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:25.283666       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3278-6286/csi-hostpath-resizer-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:25.457867       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3278-6286/csi-hostpath-snapshotter-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:25.538176       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-6112/test-container-pod\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:25.940155       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3278/inline-volume-tester-fcmmm\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-fcmmm-my-volume-0\\\" not found.\"\nI0908 04:22:25.964566       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-103/pod-projected-secrets-ff5f6d21-43da-4d04-aa2c-468686e010b5\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:26.195715       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-9116/inline-volume-tester2-cdthv\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:27.190739       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3278/inline-volume-tester-fcmmm\" err=\"0/5 nodes are available: 5 pod has unbound immediate 
PersistentVolumeClaims.\"\nI0908 04:22:27.333935       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-4199/pod-projected-configmaps-c21f06e9-51c0-4498-919d-b7944318daa3\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:27.526377       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-6702/pod-86f82806-354d-4f71-bbc5-d79c9b5c0baf\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:28.518317       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4314-6943/csi-hostpath-attacher-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:28.765490       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-9448/httpd\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:29.000675       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4314-6943/csi-hostpathplugin-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:29.154428       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4314-6943/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:29.192102       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-3278/inline-volume-tester-fcmmm\" err=\"0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.\"\nI0908 04:22:29.309412       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4314-6943/csi-hostpath-resizer-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:29.469426       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4314-6943/csi-hostpath-snapshotter-0\" 
node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:29.647963       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"emptydir-648/pod-c44f7f0b-d232-42cd-ad46-71a5c7310c7f\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:29.732997       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4551/netserver-0\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:29.892181       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4551/netserver-1\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:29.939888       1 factory.go:339] \"Unable to schedule pod; no fit; waiting\" pod=\"ephemeral-4314/inline-volume-tester-xznbf\" err=\"0/5 nodes are available: 5 persistentvolumeclaim \\\"inline-volume-tester-xznbf-my-volume-0\\\" not found.\"\nI0908 04:22:30.055063       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4551/netserver-2\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:30.209212       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-4551/netserver-3\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:30.429489       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4066/verify-service-up-host-exec-pod\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:31.278699       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5951/pod-subpath-test-inlinevolume-6w5l\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:31.464039       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"port-forwarding-8453/pfpod\" 
node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:31.622315       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-7088/deployment-shared-unset-55bfccbb6c-tjw76\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:31.630615       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-7088/deployment-shared-unset-55bfccbb6c-hq5b7\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:31.636196       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"apply-7088/deployment-shared-unset-55bfccbb6c-8qzsq\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:32.012375       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-3122/pod-hostip-84e03f2a-3681-4f80-a973-dd11133b931e\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:33.207989       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-3278/inline-volume-tester-fcmmm\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:33.296780       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3199/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-rq7sz\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:33.475469       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-455/test-rolling-update-with-lb-59c4fc87b4-s2vl5\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=3\nI0908 04:22:34.030676       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"init-container-712/pod-init-6af28d2c-4a81-4b1b-a215-1e13ee0521d8\" 
node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:34.769536       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1589-9476/csi-mockplugin-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:35.062794       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1589-9476/csi-mockplugin-attacher-0\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:36.118785       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-3834/all-succeed-nsml9\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:36.120198       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"job-3834/all-succeed-fdc7p\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:36.210780       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"ephemeral-4314/inline-volume-tester-xznbf\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:36.480755       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6112/up-down-1-w7cgv\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:36.500879       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6112/up-down-1-b4rvn\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:36.504580       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6112/up-down-1-bwnks\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:37.349974       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-6847/sample-webhook-deployment-78988fc6cd-z42p6\" 
node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:38.925209       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-4066/verify-service-up-exec-pod-rfrzf\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:39.877252       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-336-9560/csi-mockplugin-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:40.268960       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-336-9560/csi-mockplugin-attacher-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:41.638592       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-1589/pvc-volume-tester-xvpcc\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:22:41.725714       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-2631/frontend-685fc574d5-gn8l6\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:41.737586       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-2631/frontend-685fc574d5-gwwx5\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:41.759056       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-2631/frontend-685fc574d5-jzjj2\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:42.647839       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kubectl-2631/agnhost-primary-5db8ddd565-v89n4\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:22:43.414566       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod="port-forwarding-7652/pfpod" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:43.547989       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-455/test-rolling-update-with-lb-59c4fc87b4-2hnn8" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0908 04:22:43.599473       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-2631/agnhost-replica-6bcf79b489-pzvkg" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:43.609978       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-2631/agnhost-replica-6bcf79b489-m5lqn" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:44.421713       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-6688/pod-subpath-test-inlinevolume-zx6q" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:22:45.710382       1 scheduler.go:604] "Successfully bound pod to node" pod="job-3834/all-succeed-pnzdj" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:46.311234       1 scheduler.go:604] "Successfully bound pod to node" pod="job-3834/all-succeed-ql9z5" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:46.843282       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-336/pvc-volume-tester-h6tvg" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:22:47.977489       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-6841/simpletest.deployment-9858f564d-9mkc8" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:47.983626       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-6841/simpletest.deployment-9858f564d-ms9xv" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:48.859449       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2757/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-5bkhj" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:22:49.011356       1 scheduler.go:604] "Successfully bound pod to node" pod="e2e-kubelet-etc-hosts-3820/test-pod" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:49.178463       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/up-down-2-fmkff" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:49.178975       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/up-down-2-phqcz" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:49.197118       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/up-down-2-xg2f9" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:51.276314       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-455/test-rolling-update-with-lb-59c4fc87b4-t68t8" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:22:51.388773       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5338/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-fgzhf" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:22:52.556602       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-3144/pod-handle-http-request" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:53.684416       1 scheduler.go:604] "Successfully bound pod to node" pod="e2e-kubelet-etc-hosts-3820/test-host-network-pod" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:53.762556       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3920-8464/csi-mockplugin-0" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:22:54.002646       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4066/verify-service-down-host-exec-pod" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:54.093124       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5730/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-6hcn8" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:22:54.640583       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-455/test-rolling-update-with-lb-686dff95d9-v7tjq" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=3
I0908 04:22:55.068635       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7373/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-dbh78" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:22:57.947526       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-4551/test-container-pod" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:58.342626       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:58.351185       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:22:58.351771       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:00.170327       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-5912/test-pod-1" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:00.327583       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-5912/test-pod-2" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:00.484868       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-5912/test-pod-3" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:00.791150       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-455/test-rolling-update-with-lb-686dff95d9-z2pxt" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=2
I0908 04:23:01.233686       1 scheduler.go:604] "Successfully bound pod to node" pod="container-lifecycle-hook-3144/pod-with-prestop-http-hook" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:01.688105       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/verify-service-up-host-exec-pod" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:02.756886       1 scheduler.go:604] "Successfully bound pod to node" pod="services-146/kube-proxy-mode-detector" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:02.845718       1 scheduler.go:604] "Successfully bound pod to node" pod="deployment-455/test-rolling-update-with-lb-686dff95d9-zp5w8" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:03.477697       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-5805/explicit-nonroot-uid" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:04.204287       1 scheduler.go:604] "Successfully bound pod to node" pod="port-forwarding-5041/pfpod" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:04.483480       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2946-3841/csi-mockplugin-0" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:04.956051       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-562/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-s787l" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:05.317620       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-1" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:06.386405       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-5338/pod-5e01fd6d-c5b7-4307-b105-35b867db7bbb" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:06.515206       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-1" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:07.309526       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-1" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:08.176137       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/verify-service-up-exec-pod-cx5xp" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:10.448080       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7373/pod-8857e452-d2a3-41a7-992b-ff8aa889f176" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:11.159110       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-5730/exec-volume-test-preprovisionedpv-grnn" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:11.391392       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2946/pvc-volume-tester-zfw9l" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:12.424009       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2757/pod-subpath-test-preprovisionedpv-5tm7" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:13.228553       1 scheduler.go:604] "Successfully bound pod to node" pod="volumemode-7373/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-2hrjw" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:13.510208       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-2" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:13.832760       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-824/netserver-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:13.995211       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-824/netserver-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:14.156372       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-824/netserver-2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:14.210869       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2517/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-g2k9d" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:14.319146       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-824/netserver-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:14.721504       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-2" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:14.879803       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/verify-service-up-host-exec-pod" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:15.628746       1 scheduler.go:604] "Successfully bound pod to node" pod="services-146/affinity-clusterip-timeout-4v9c8" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:15.642155       1 scheduler.go:604] "Successfully bound pod to node" pod="services-146/affinity-clusterip-timeout-spfv7" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:15.644760       1 scheduler.go:604] "Successfully bound pod to node" pod="services-146/affinity-clusterip-timeout-n4cbn" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:16.707480       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-2" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:17.072309       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3167-4340/csi-hostpath-attacher-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:17.121949       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3920/pvc-volume-tester-bpsb4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:17.592504       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3167-4340/csi-hostpathplugin-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:17.735876       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3167-4340/csi-hostpath-provisioner-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:17.886496       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3167-4340/csi-hostpath-resizer-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:18.062358       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3167-4340/csi-hostpath-snapshotter-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:19.473807       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2531/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-6b65k" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:20.915270       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-3" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:23.737815       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-3" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:23.995644       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-4561/pod-subpath-test-inlinevolume-5f94" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:25.399151       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-562/pod-subpath-test-preprovisionedpv-6gnm" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:25.419536       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2517/pod-subpath-test-preprovisionedpv-z77w" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:26.519820       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-3" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:27.543328       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-3728/pod-8d087956-2013-44dc-951a-a0bdf3ee2a33" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:27.723026       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:27.862696       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-3167/pod-subpath-test-dynamicpv-j6k4" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:29.369679       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:29.728587       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-9406/pod-9501b8ac-e3a6-47ae-8cdc-3ddbce76a178" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:31.373339       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-4" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:34.108201       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-126-6152/csi-mockplugin-0" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:34.344259       1 scheduler.go:604] "Successfully bound pod to node" pod="services-146/execpod-affinityhx6dh" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:34.366827       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3741-4582/csi-mockplugin-0" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:34.413793       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-126-6152/csi-mockplugin-attacher-0" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:34.681965       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3741-4582/csi-mockplugin-attacher-0" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:35.576822       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-5" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:36.143228       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-8427/pod-subpath-test-inlinevolume-jqrj" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:36.600327       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-5" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:37.245180       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2980/externalname-service-xnkgx" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:37.245992       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2980/externalname-service-b2wfv" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:37.357033       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/verify-service-up-exec-pod-rp9sl" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:38.192449       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-5" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.543378       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-vdfcm" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.562907       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-rrlz9" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.602238       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-47lbm" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.602719       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-59hw8" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.605664       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-cm2q5" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.606388       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-xbff8" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.650495       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-vcxfb" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.715568       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-5k6zr" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.716332       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-sr9wl" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:40.716451       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-2168/simpletest.rc-47kqr" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:41.086299       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4717/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-28gks" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:42.222280       1 scheduler.go:604] "Successfully bound pod to node" pod="pod-network-test-824/test-container-pod" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:42.560009       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2531/pod-subpath-test-preprovisionedpv-9gx9" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:44.141278       1 scheduler.go:604] "Successfully bound pod to node" pod="emptydir-9059/pod-12333b09-35e7-45f2-a08a-92c61b5901ab" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:45.206241       1 scheduler.go:604] "Successfully bound pod to node" pod="svc-latency-7720/svc-latency-rc-xqqp7" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:45.954187       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-6" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:46.590031       1 scheduler.go:604] "Successfully bound pod to node" pod="services-2980/execpodwmdls" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:47.750902       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-6" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:49.930765       1 scheduler.go:604] "Successfully bound pod to node" pod="projected-237/downwardapi-volume-d93c9910-1f63-45ba-9486-b1fabcd4246a" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:50.718169       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7977/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-gffrf" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:50.910536       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-7" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:51.356155       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-126/pvc-volume-tester-mgbzf" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:51.659793       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-3741/pvc-volume-tester-w8hsx" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:51.732303       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-6" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:52.374411       1 scheduler.go:604] "Successfully bound pod to node" pod="downward-api-70/downwardapi-volume-ba8daa8a-604a-4ad2-a078-b3c70bf08874" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:52.622680       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9927-9568/csi-hostpath-attacher-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:53.069995       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9927-9568/csi-hostpathplugin-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:53.202467       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9927-9568/csi-hostpath-provisioner-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:53.365259       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9927-9568/csi-hostpath-resizer-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:53.528503       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9927-9568/csi-hostpath-snapshotter-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:54.470403       1 scheduler.go:604] "Successfully bound pod to node" pod="replication-controller-6297/pod-release-rxvs8" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:55.154295       1 scheduler.go:604] "Successfully bound pod to node" pod="replication-controller-6297/pod-release-l9jh2" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:55.404668       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-4717/pod-0d250597-9068-48da-9e86-f125b70b5400" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:55.550516       1 scheduler.go:604] "Successfully bound pod to node" pod="kubelet-test-7216/bin-false79ff2e8f-3f70-4f75-aae7-903971f6d422" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:55.929901       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1172/simpletest.deployment-76b58b9b6c-hkstz" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:55.936850       1 scheduler.go:604] "Successfully bound pod to node" pod="gc-1172/simpletest.deployment-76b58b9b6c-4sbbj" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:57.024610       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-1772/security-context-dc0c175b-acea-4a0e-89c7-9039cddbb5af" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:57.403969       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-7977/pod-subpath-test-preprovisionedpv-9442" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:23:57.706532       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-7" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:59.784783       1 scheduler.go:604] "Successfully bound pod to node" pod="sysctl-2020/sysctl-c147fc99-8c13-466b-9b45-0580fd4655df" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:59.855455       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-8" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:23:59.890222       1 scheduler.go:604] "Successfully bound pod to node" pod="pvc-protection-9450/pvc-tester-fhkjm" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:00.196672       1 scheduler.go:604] "Successfully bound pod to node" pod="cronjob-6268/successful-jobs-history-limit-27184584-6ldnz" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:03.330378       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-9927/pod-subpath-test-dynamicpv-kjpv" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:04.596441       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-8" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:04.832570       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/verify-service-down-host-exec-pod" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:05.084687       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-7" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:05.336339       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1297/netserver-0" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:05.494897       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1297/netserver-1" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:05.653269       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1297/netserver-2" node="ip-172-20-53-124.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:05.809121       1 scheduler.go:604] "Successfully bound pod to node" pod="nettest-1297/netserver-3" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:06.579369       1 scheduler.go:604] "Successfully bound pod to node" pod="container-probe-4890/busybox-ae5e486b-93ad-45fe-8a6d-ec9bb3c82eb4" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:07.046866       1 scheduler.go:604] "Successfully bound pod to node" pod="hostpath-8120/pod-host-path-test" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:07.620450       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-3070/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-glkq2" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:07.991687       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2196/hostexec-ip-172-20-47-217.ap-northeast-2.compute.internal-njcqf" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:09.196478       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-3479/pod-configmaps-128f957c-52c2-480c-9ea6-434c400e301e" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:10.763669       1 scheduler.go:604] "Successfully bound pod to node" pod="volume-4910/aws-injector" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:11.909282       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-8" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:13.314178       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-9" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:14.593716       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-9" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:15.918160       1 scheduler.go:604] "Successfully bound pod to node" pod="security-context-test-9606/implicit-root-uid" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:16.138781       1 scheduler.go:604] "Successfully bound pod to node" pod="services-4639/hairpin" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:17.084709       1 scheduler.go:604] "Successfully bound pod to node" pod="kubectl-6614/httpd" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:19.123851       1 scheduler.go:604] "Successfully bound pod to node" pod="services-6112/verify-service-up-host-exec-pod" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:19.299913       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-10" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:20.034384       1 scheduler.go:604] "Successfully bound pod to node" pod="svcaccounts-7377/test-pod-f23de375-69eb-4b68-9340-3d562a5ad457" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:20.485764       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2864/hostexec-ip-172-20-48-118.ap-northeast-2.compute.internal-mfxtr" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:21.263544       1 scheduler.go:604] "Successfully bound pod to node" pod="persistent-local-volumes-test-3070/pod-4211e4ab-f155-444a-bcdf-409886c528c5" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:21.411369       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2327-7019/csi-mockplugin-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:21.558418       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2327-7019/csi-mockplugin-attacher-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:21.718749       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2327-7019/csi-mockplugin-resizer-0" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:21.905842       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-2-10" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:22.369164       1 scheduler.go:604] "Successfully bound pod to node" pod="configmap-1041/pod-configmaps-60703c1a-96be-4aaa-a543-2bf2e890ebbb" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:24.076974       1 scheduler.go:604] "Successfully bound pod to node" pod="services-1760/affinity-clusterip-transition-44f5p" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:24.088096       1 scheduler.go:604] "Successfully bound pod to node" pod="services-1760/affinity-clusterip-transition-gfmhq" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:24.103018       1 scheduler.go:604] "Successfully bound pod to node" pod="services-1760/affinity-clusterip-transition-plz88" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:25.781842       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2196/pod-subpath-test-preprovisionedpv-922n" node="ip-172-20-47-217.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:26.335141       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-1-9" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:26.703500       1 scheduler.go:604] "Successfully bound pod to node" pod="pods-6300/pod-submit-status-0-11" node="ip-172-20-61-194.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=4
I0908 04:24:27.103130       1 scheduler.go:604] "Successfully bound pod to node" pod="provisioning-2864/pod-subpath-test-preprovisionedpv-4rwm" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:28.121591       1 scheduler.go:604] "Successfully bound pod to node" pod="csi-mock-volumes-2327/pvc-volume-tester-jrcfv" node="ip-172-20-48-118.ap-northeast-2.compute.internal" evaluatedNodes=5 feasibleNodes=1
I0908 04:24:29.160052       1 scheduler.go:604] "Successfully bound 
pod to node\" pod=\"statefulset-4819/ss2-0\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:29.302692       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-6300/pod-submit-status-2-11\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:30.681372       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-6300/pod-submit-status-0-12\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:31.371785       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1297/test-container-pod\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:31.535810       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"nettest-1297/host-test-container-pod\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:32.689260       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-6300/pod-submit-status-1-10\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:33.600248       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6112/verify-service-up-exec-pod-mzsnr\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:33.741923       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-1760/execpod-affinity9dfn5\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:34.007656       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-8046/busybox-privileged-true-7541944f-d5c0-4331-9759-ffdd6cc8177e\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:34.024613       1 scheduler.go:604] 
\"Successfully bound pod to node\" pod=\"svcaccounts-7377/test-pod-f23de375-69eb-4b68-9340-3d562a5ad457\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:34.099571       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-3070/pod-95d37538-a9a7-43d0-a084-3d103aebe056\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:34.189095       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-4819/ss2-1\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:34.445561       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7732/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-w6w5c\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:35.578508       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-2196/pod-subpath-test-preprovisionedpv-922n\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:36.420956       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-283-951/csi-hostpath-attacher-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:36.905772       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-283-951/csi-hostpathplugin-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:37.062304       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-283-951/csi-hostpath-provisioner-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:37.135097       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-6300/pod-submit-status-0-13\" 
node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:37.223429       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-283-951/csi-hostpath-resizer-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:37.415213       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-283-951/csi-hostpath-snapshotter-0\" node=\"ip-172-20-48-118.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:38.050548       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1145/pod-subpath-test-inlinevolume-k66v\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:38.688609       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-8684/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-4s6l8\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:39.749741       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"statefulset-4819/ss2-2\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:39.939038       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-6300/pod-submit-status-2-12\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:41.328807       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-6300/pod-submit-status-1-11\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:41.533251       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"svcaccounts-7377/test-pod-f23de375-69eb-4b68-9340-3d562a5ad457\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:42.098544       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"containers-939/client-containers-9701e78e-2015-46d7-8d4b-50b5875e996f\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:42.220928       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-9188/pod-012d146b-0436-47e8-a6ff-46abd0863baf\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:43.078676       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-4910/aws-client\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:44.228103       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-6878/hostexec-ip-172-20-53-124.ap-northeast-2.compute.internal-fmgfk\" node=\"ip-172-20-53-124.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:44.902059       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-8440/hostexec-ip-172-20-61-194.ap-northeast-2.compute.internal-m4k7c\" node=\"ip-172-20-61-194.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0908 04:24:45.440254       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"configmap-9697/pod-configmaps-392a653e-8c67-4034-944f-9ce3ab7cc9de\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:45.675433       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-6300/pod-submit-status-0-14\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:45.711259       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"webhook-5268/sample-webhook-deployment-78988fc6cd-m2jmc\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:46.487402       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"services-6112/up-down-3-mcf7n\" node=\"ip-172-20-47-217.ap-northeast-2.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0908 04:24:46.523740       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"services-6112/up-down-3-tbd95\" n