Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-08-26 00:30
Elapsed: 40m43s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0826 00:32:38.019331    4131 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0826 00:32:38.021136    4131 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.22.0-alpha.3+v1.22.0-alpha.2-271-g1d7eca2b93/linux/amd64/kops
I0826 00:32:38.951393    4131 up.go:43] Cleaning up any leaked resources from previous cluster
I0826 00:32:38.951434    4131 dumplogs.go:38] /logs/artifacts/bdfb00d4-0604-11ec-99f5-c2ede4b31aac/kops toolbox dump --name e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user admin
I0826 00:32:38.983184    4151 featureflag.go:168] FeatureFlag "SpecOverrideFlag"=true
I0826 00:32:38.983365    4151 featureflag.go:168] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io" not found
W0826 00:32:39.537962    4131 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0826 00:32:39.538018    4131 down.go:48] /logs/artifacts/bdfb00d4-0604-11ec-99f5-c2ede4b31aac/kops delete cluster --name e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --yes
I0826 00:32:39.565466    4160 featureflag.go:168] FeatureFlag "SpecOverrideFlag"=true
I0826 00:32:39.565852    4160 featureflag.go:168] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io" not found
I0826 00:32:40.064304    4131 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/08/26 00:32:40 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0826 00:32:40.075778    4131 http.go:37] curl https://ip.jsb.workers.dev
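
(Editor's note: the two curl lines above show the harness determining its own external IP, presumably for the --admin-access CIDR in the create command below: it first asks the GCE metadata service and, when that returns 404, falls back to https://ip.jsb.workers.dev. A minimal, hypothetical Go sketch of that fallback — not the harness's actual code; the Metadata-Flavor header is the standard GCE metadata requirement.)

package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetch issues a GET and returns the body; gceMetadata adds the header the
// GCE metadata service requires.
func fetch(url string, gceMetadata bool) (string, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	if gceMetadata {
		req.Header.Set("Metadata-Flavor", "Google")
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// Mirrors the "returned 404" fallback seen in the log above.
		return "", fmt.Errorf("%s returned %d", url, resp.StatusCode)
	}
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	ip, err := fetch("http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip", true)
	if err != nil {
		ip, err = fetch("https://ip.jsb.workers.dev", false)
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("external ip:", ip)
}
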
I0826 00:32:40.219937    4131 up.go:144] /logs/artifacts/bdfb00d4-0604-11ec-99f5-c2ede4b31aac/kops create cluster --name e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.19.14 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=136693071363/debian-10-amd64-20210721-710 --channel=alpha --networking=kubenet --container-runtime=containerd --admin-access 35.192.90.167/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ap-northeast-2a --master-size c5.large
I0826 00:32:40.241447    4170 featureflag.go:168] FeatureFlag "SpecOverrideFlag"=true
I0826 00:32:40.241640    4170 featureflag.go:168] FeatureFlag "AlphaAllowGCE"=true
I0826 00:32:40.286058    4170 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0826 00:32:40.948088    4170 new_cluster.go:1055]  Cloud Provider ID = aws
... skipping 41 lines ...

I0826 00:33:08.487821    4131 up.go:181] /logs/artifacts/bdfb00d4-0604-11ec-99f5-c2ede4b31aac/kops validate cluster --name e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0826 00:33:08.507257    4191 featureflag.go:168] FeatureFlag "SpecOverrideFlag"=true
I0826 00:33:08.507402    4191 featureflag.go:168] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io

W0826 00:33:09.913669    4191 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0826 00:33:19.950241    4191 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
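
(Editor's note: the retries that follow all repeat the dns-controller message above — kops seeds the API record with the placeholder 203.0.113.123 and validation cannot pass until dns-controller rewrites it to the real master IP. A minimal Go sketch of that wait, for illustration only: it is not the kops validator; the host name, placeholder address, and 15-minute window are taken from this log.)

package main

import (
	"fmt"
	"net"
	"time"
)

const (
	apiHost     = "api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io" // from the validate output above
	placeholder = "203.0.113.123"                                 // placeholder record kops creates
)

func main() {
	deadline := time.Now().Add(15 * time.Minute) // matches --wait 15m0s
	for time.Now().Before(deadline) {
		addrs, err := net.LookupHost(apiHost)
		switch {
		case err != nil:
			fmt.Println("lookup failed, will retry:", err)
		case len(addrs) == 1 && addrs[0] == placeholder:
			fmt.Println("API record still at the kops placeholder, will retry")
		default:
			fmt.Println("API record updated:", addrs)
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for dns-controller to update the API record")
}
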
W0826 00:33:30.018418    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:33:40.054060    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:33:50.113655    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:34:00.154024    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:34:10.186291    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:34:20.223183    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:34:30.258874    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:34:40.314173    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:34:50.344675    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:35:00.374051    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:35:10.422131    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:35:20.451298    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:35:30.492129    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:35:40.524971    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:35:50.554991    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:36:00.586049    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:36:10.615258    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:36:20.645105    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:36:30.696465    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:36:40.727076    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:36:50.765844    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
W0826 00:37:00.813950    4191 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0826 00:37:10.845514    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 9 lines ...
Machine	i-096c5d8c993f8b9a9							machine "i-096c5d8c993f8b9a9" has not yet joined cluster
Node	ip-172-20-54-134.ap-northeast-2.compute.internal			master "ip-172-20-54-134.ap-northeast-2.compute.internal" is missing kube-scheduler pod
Pod	kube-system/kube-dns-696cb84c7-jwxd8					system-cluster-critical pod "kube-dns-696cb84c7-jwxd8" is pending
Pod	kube-system/kube-dns-autoscaler-55f8f75459-kxc27			system-cluster-critical pod "kube-dns-autoscaler-55f8f75459-kxc27" is pending
Pod	kube-system/kube-proxy-ip-172-20-54-134.ap-northeast-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-54-134.ap-northeast-2.compute.internal" is pending

Validation Failed
W0826 00:37:24.732568    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-61-11.ap-northeast-2.compute.internal	node "ip-172-20-61-11.ap-northeast-2.compute.internal" of role "node" is not ready
Pod	kube-system/kube-dns-696cb84c7-jwxd8		system-cluster-critical pod "kube-dns-696cb84c7-jwxd8" is pending
Pod	kube-system/kube-dns-696cb84c7-rpz8c		system-cluster-critical pod "kube-dns-696cb84c7-rpz8c" is pending

Validation Failed
W0826 00:37:37.450907    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
W0826 00:37:47.468845    4191 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

NODE STATUS
... skipping 20 lines ...
ip-172-20-62-60.ap-northeast-2.compute.internal		node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-61-11.ap-northeast-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-61-11.ap-northeast-2.compute.internal" is pending

Validation Failed
W0826 00:38:12.666950    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 6 lines ...
ip-172-20-62-60.ap-northeast-2.compute.internal		node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-62-163.ap-northeast-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-62-163.ap-northeast-2.compute.internal" is pending

Validation Failed
W0826 00:38:25.342715    4191 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ap-northeast-2a	Master	c5.large	1	1	ap-northeast-2a
nodes-ap-northeast-2a	Node	t3.medium	4	4	ap-northeast-2a

... skipping 1372 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1439
------------------------------
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:40:59.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-842" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:00.229: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 74 lines ...
STEP: Destroying namespace "pod-disks-7848" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [2.490 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:448

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:76
------------------------------
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:00.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3445" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:00.790: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:00.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-281" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:01.037: INFO: Driver local doesn't support ext4 -- skipping
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:00.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6073" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:01.300: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 181 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl diff
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:871
    should check if kubectl diff finds a difference for Deployments [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 3 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:47
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:64
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 00:40:59.110: INFO: Waiting up to 5m0s for pod "pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4" in namespace "emptydir-4284" to be "Succeeded or Failed"
Aug 26 00:40:59.266: INFO: Pod "pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4": Phase="Pending", Reason="", readiness=false. Elapsed: 155.717155ms
Aug 26 00:41:01.425: INFO: Pod "pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314728091s
Aug 26 00:41:03.581: INFO: Pod "pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471154936s
Aug 26 00:41:05.738: INFO: Pod "pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.627474804s
STEP: Saw pod success
Aug 26 00:41:05.738: INFO: Pod "pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4" satisfied condition "Succeeded or Failed"
Aug 26 00:41:05.894: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4 container test-container: <nil>
STEP: delete the pod
Aug 26 00:41:06.231: INFO: Waiting for pod pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4 to disappear
Aug 26 00:41:06.391: INFO: Pod pod-1eaba835-ca0f-4053-8d46-8afad8fa3de4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:64
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:41:06.874: INFO: >>> kubeConfig: /root/.kube/config
... skipping 31 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 26 00:40:59.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13" in namespace "downward-api-7642" to be "Succeeded or Failed"
Aug 26 00:40:59.344: INFO: Pod "downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13": Phase="Pending", Reason="", readiness=false. Elapsed: 166.076897ms
Aug 26 00:41:01.507: INFO: Pod "downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328135074s
Aug 26 00:41:03.669: INFO: Pod "downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.490256159s
Aug 26 00:41:05.831: INFO: Pod "downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.652645041s
Aug 26 00:41:07.993: INFO: Pod "downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.814811799s
STEP: Saw pod success
Aug 26 00:41:07.993: INFO: Pod "downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13" satisfied condition "Succeeded or Failed"
Aug 26 00:41:08.156: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13 container client-container: <nil>
STEP: delete the pod
Aug 26 00:41:08.498: INFO: Waiting for pod downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13 to disappear
Aug 26 00:41:08.661: INFO: Pod downwardapi-volume-9939c2ef-79aa-44dd-864d-b534aaf3ea13 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.117 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:09.164: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 135 lines ...
• [SLOW TEST:13.465 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:11.483: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-54381be8-9467-4f17-bc7f-ea8befd1289d
STEP: Creating a pod to test consume configMaps
Aug 26 00:41:09.082: INFO: Waiting up to 5m0s for pod "pod-configmaps-048e5178-2862-49de-84ab-e01e48310f25" in namespace "configmap-987" to be "Succeeded or Failed"
Aug 26 00:41:09.237: INFO: Pod "pod-configmaps-048e5178-2862-49de-84ab-e01e48310f25": Phase="Pending", Reason="", readiness=false. Elapsed: 155.615522ms
Aug 26 00:41:11.394: INFO: Pod "pod-configmaps-048e5178-2862-49de-84ab-e01e48310f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.31178355s
STEP: Saw pod success
Aug 26 00:41:11.394: INFO: Pod "pod-configmaps-048e5178-2862-49de-84ab-e01e48310f25" satisfied condition "Succeeded or Failed"
Aug 26 00:41:11.551: INFO: Trying to get logs from node ip-172-20-62-60.ap-northeast-2.compute.internal pod pod-configmaps-048e5178-2862-49de-84ab-e01e48310f25 container configmap-volume-test: <nil>
STEP: delete the pod
Aug 26 00:41:11.875: INFO: Waiting for pod pod-configmaps-048e5178-2862-49de-84ab-e01e48310f25 to disappear
Aug 26 00:41:12.031: INFO: Pod pod-configmaps-048e5178-2862-49de-84ab-e01e48310f25 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:12.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-987" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:12.378: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 33 lines ...
Aug 26 00:41:07.268: INFO: Creating a PV followed by a PVC
Aug 26 00:41:07.585: INFO: Waiting for PV local-pvvlpxt to bind to PVC pvc-jsmtz
Aug 26 00:41:07.585: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-jsmtz] to have phase Bound
Aug 26 00:41:07.743: INFO: PersistentVolumeClaim pvc-jsmtz found and phase=Bound (158.048447ms)
Aug 26 00:41:07.743: INFO: Waiting up to 3m0s for PersistentVolume local-pvvlpxt to have phase Bound
Aug 26 00:41:07.901: INFO: PersistentVolume local-pvvlpxt found and phase=Bound (158.01829ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:361
STEP: local-volume-type: dir
STEP: Initializing test volumes
Aug 26 00:41:08.217: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-1f5e6bb9-dbd6-456e-8f25-9882a5bead56] Namespace:persistent-local-volumes-test-6461 PodName:hostexec-ip-172-20-60-101.ap-northeast-2.compute.internal-twfln ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Aug 26 00:41:08.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:14.355 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:339
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:361
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:12.434: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:13.453: INFO: Only supported for providers [openstack] (not aws)
... skipping 71 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
Aug 26 00:40:59.002: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug 26 00:40:59.002: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-6k42
STEP: Creating a pod to test exec-volume-test
Aug 26 00:40:59.160: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-6k42" in namespace "volume-9809" to be "Succeeded or Failed"
Aug 26 00:40:59.315: INFO: Pod "exec-volume-test-inlinevolume-6k42": Phase="Pending", Reason="", readiness=false. Elapsed: 154.973043ms
Aug 26 00:41:01.471: INFO: Pod "exec-volume-test-inlinevolume-6k42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310610212s
Aug 26 00:41:03.626: INFO: Pod "exec-volume-test-inlinevolume-6k42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.466088987s
Aug 26 00:41:05.782: INFO: Pod "exec-volume-test-inlinevolume-6k42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62191497s
Aug 26 00:41:07.938: INFO: Pod "exec-volume-test-inlinevolume-6k42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.777694447s
Aug 26 00:41:10.094: INFO: Pod "exec-volume-test-inlinevolume-6k42": Phase="Pending", Reason="", readiness=false. Elapsed: 10.934074427s
Aug 26 00:41:12.250: INFO: Pod "exec-volume-test-inlinevolume-6k42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.089872547s
STEP: Saw pod success
Aug 26 00:41:12.250: INFO: Pod "exec-volume-test-inlinevolume-6k42" satisfied condition "Succeeded or Failed"
Aug 26 00:41:12.406: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod exec-volume-test-inlinevolume-6k42 container exec-container-inlinevolume-6k42: <nil>
STEP: delete the pod
Aug 26 00:41:12.723: INFO: Waiting for pod exec-volume-test-inlinevolume-6k42 to disappear
Aug 26 00:41:12.878: INFO: Pod exec-volume-test-inlinevolume-6k42 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-6k42
Aug 26 00:41:12.878: INFO: Deleting pod "exec-volume-test-inlinevolume-6k42" in namespace "volume-9809"
... skipping 33 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:13.507: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 65 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1222
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
Aug 26 00:41:11.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 00:41:12.446: INFO: Waiting up to 5m0s for pod "pod-6718c958-3bd7-476a-accb-5434de78ab32" in namespace "emptydir-1134" to be "Succeeded or Failed"
Aug 26 00:41:12.602: INFO: Pod "pod-6718c958-3bd7-476a-accb-5434de78ab32": Phase="Pending", Reason="", readiness=false. Elapsed: 156.461139ms
Aug 26 00:41:14.759: INFO: Pod "pod-6718c958-3bd7-476a-accb-5434de78ab32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.313488486s
STEP: Saw pod success
Aug 26 00:41:14.759: INFO: Pod "pod-6718c958-3bd7-476a-accb-5434de78ab32" satisfied condition "Succeeded or Failed"
Aug 26 00:41:14.916: INFO: Trying to get logs from node ip-172-20-62-60.ap-northeast-2.compute.internal pod pod-6718c958-3bd7-476a-accb-5434de78ab32 container test-container: <nil>
STEP: delete the pod
Aug 26 00:41:15.235: INFO: Waiting for pod pod-6718c958-3bd7-476a-accb-5434de78ab32 to disappear
Aug 26 00:41:15.392: INFO: Pod pod-6718c958-3bd7-476a-accb-5434de78ab32 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:15.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1134" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:15.719: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 96 lines ...
STEP: Destroying namespace "services-7647" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:17.093: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
Aug 26 00:41:13.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 26 00:41:14.456: INFO: Waiting up to 5m0s for pod "pod-d266aada-8d40-4122-9707-40ec5ee4cc62" in namespace "emptydir-784" to be "Succeeded or Failed"
Aug 26 00:41:14.612: INFO: Pod "pod-d266aada-8d40-4122-9707-40ec5ee4cc62": Phase="Pending", Reason="", readiness=false. Elapsed: 155.683044ms
Aug 26 00:41:16.767: INFO: Pod "pod-d266aada-8d40-4122-9707-40ec5ee4cc62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.311514655s
STEP: Saw pod success
Aug 26 00:41:16.768: INFO: Pod "pod-d266aada-8d40-4122-9707-40ec5ee4cc62" satisfied condition "Succeeded or Failed"
Aug 26 00:41:16.923: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-d266aada-8d40-4122-9707-40ec5ee4cc62 container test-container: <nil>
STEP: delete the pod
Aug 26 00:41:17.240: INFO: Waiting for pod pod-d266aada-8d40-4122-9707-40ec5ee4cc62 to disappear
Aug 26 00:41:17.396: INFO: Pod pod-d266aada-8d40-4122-9707-40ec5ee4cc62 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 10 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:47
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:68
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 26 00:41:14.457: INFO: Waiting up to 5m0s for pod "pod-8339eb8b-f14f-413c-a2f7-63ad3dde29ea" in namespace "emptydir-7650" to be "Succeeded or Failed"
Aug 26 00:41:14.612: INFO: Pod "pod-8339eb8b-f14f-413c-a2f7-63ad3dde29ea": Phase="Pending", Reason="", readiness=false. Elapsed: 154.909571ms
Aug 26 00:41:16.767: INFO: Pod "pod-8339eb8b-f14f-413c-a2f7-63ad3dde29ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.310066413s
STEP: Saw pod success
Aug 26 00:41:16.767: INFO: Pod "pod-8339eb8b-f14f-413c-a2f7-63ad3dde29ea" satisfied condition "Succeeded or Failed"
Aug 26 00:41:16.922: INFO: Trying to get logs from node ip-172-20-62-60.ap-northeast-2.compute.internal pod pod-8339eb8b-f14f-413c-a2f7-63ad3dde29ea container test-container: <nil>
STEP: delete the pod
Aug 26 00:41:17.245: INFO: Waiting for pod pod-8339eb8b-f14f-413c-a2f7-63ad3dde29ea to disappear
Aug 26 00:41:17.400: INFO: Pod pod-8339eb8b-f14f-413c-a2f7-63ad3dde29ea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:17.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7650" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0}

S
------------------------------
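For readers following the log, the "(root,0644,default)" EmptyDir case that just passed boils down to a pod with an emptyDir volume on the node's default medium whose test container writes a file with mode 0644 and reports the mode back. The sketch below is illustrative only, assuming the standard k8s.io/api types and a busybox image rather than the e2e framework's own mounttest image; pod, container, and volume names are made up.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod, roughly what an "emptydir 0644 on node default medium"
	// check needs: an emptyDir volume (default medium = node disk, not tmpfs)
	// and a container that creates a file with mode 0644 and prints the mode.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource means the default medium; the
				// "(root,0644,tmpfs)" variant later in this log would set
				// Medium: corev1.StorageMediumMemory instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

The suite then waits for the pod to reach a terminal phase, which is what the repeated "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" lines above correspond to.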
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:17.734: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:18.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9019" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:112
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 140 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:41:18.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-d4c35668-fd8d-4112-abb6-e0d9459d2ace
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:19.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1853" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}

SS
------------------------------
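The Secrets case above ("should fail to create secret due to empty secret key") relies on API-server validation rejecting a Secret whose data map contains an empty key. A minimal sketch of the same idea with client-go, assuming a kubeconfig at the path this log uses; the secret name and namespace are invented, and the error text depends on the server's validation message.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the log shows.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-demo"},
		// An empty string is not a valid data key, so validation should reject this.
		Data: map[string][]byte{"": []byte("value")},
	}

	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), secret, metav1.CreateOptions{})
	if err != nil {
		fmt.Println("create rejected as expected:", err)
		return
	}
	fmt.Println("unexpected: secret with an empty key was accepted")
}
```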
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:21.589: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 21 lines ...
Aug 26 00:41:17.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Aug 26 00:41:18.701: INFO: Waiting up to 5m0s for pod "downward-api-dcdde901-00ca-4dae-825d-f09ecef326a0" in namespace "downward-api-3940" to be "Succeeded or Failed"
Aug 26 00:41:18.857: INFO: Pod "downward-api-dcdde901-00ca-4dae-825d-f09ecef326a0": Phase="Pending", Reason="", readiness=false. Elapsed: 155.645954ms
Aug 26 00:41:21.013: INFO: Pod "downward-api-dcdde901-00ca-4dae-825d-f09ecef326a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311870648s
Aug 26 00:41:23.169: INFO: Pod "downward-api-dcdde901-00ca-4dae-825d-f09ecef326a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.468006078s
STEP: Saw pod success
Aug 26 00:41:23.169: INFO: Pod "downward-api-dcdde901-00ca-4dae-825d-f09ecef326a0" satisfied condition "Succeeded or Failed"
Aug 26 00:41:23.325: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod downward-api-dcdde901-00ca-4dae-825d-f09ecef326a0 container dapi-container: <nil>
STEP: delete the pod
Aug 26 00:41:23.643: INFO: Waiting for pod downward-api-dcdde901-00ca-4dae-825d-f09ecef326a0 to disappear
Aug 26 00:41:23.799: INFO: Pod downward-api-dcdde901-00ca-4dae-825d-f09ecef326a0 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.357 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:41:21.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Aug 26 00:41:22.545: INFO: Waiting up to 5m0s for pod "downward-api-001ec32e-30fe-4023-81de-7b3392f7f11f" in namespace "downward-api-6552" to be "Succeeded or Failed"
Aug 26 00:41:22.701: INFO: Pod "downward-api-001ec32e-30fe-4023-81de-7b3392f7f11f": Phase="Pending", Reason="", readiness=false. Elapsed: 156.649897ms
Aug 26 00:41:24.859: INFO: Pod "downward-api-001ec32e-30fe-4023-81de-7b3392f7f11f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314061141s
Aug 26 00:41:27.016: INFO: Pod "downward-api-001ec32e-30fe-4023-81de-7b3392f7f11f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.47185515s
STEP: Saw pod success
Aug 26 00:41:27.017: INFO: Pod "downward-api-001ec32e-30fe-4023-81de-7b3392f7f11f" satisfied condition "Succeeded or Failed"
Aug 26 00:41:27.173: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod downward-api-001ec32e-30fe-4023-81de-7b3392f7f11f container dapi-container: <nil>
STEP: delete the pod
Aug 26 00:41:27.494: INFO: Waiting for pod downward-api-001ec32e-30fe-4023-81de-7b3392f7f11f to disappear
Aug 26 00:41:27.650: INFO: Pod downward-api-001ec32e-30fe-4023-81de-7b3392f7f11f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.364 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
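The two Downward API cases above ("pod name, namespace and IP address as env vars" and "limits.cpu/memory and requests.cpu/memory as env vars") both hinge on env vars sourced from fieldRef and resourceFieldRef. Below is a sketch of a container wired up that way, assuming the standard k8s.io/api types; names, image, and resource values are placeholders rather than the dapi-container pods the log shows.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	container := corev1.Container{
		Name:    "dapi-demo",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "env"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
			Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
		},
		Env: []corev1.EnvVar{
			// Pod metadata via fieldRef (first Downward API test above).
			{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
			{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
			{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
			// Container resources via resourceFieldRef (second test above).
			{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
			{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"}}},
		},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers:    []corev1.Container{container},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

The test container simply dumps its environment, and the suite checks the expected values in the pod logs, which is why the log lines above fetch logs from the dapi-container after the pod succeeds.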
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:27.983: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 64 lines ...
Aug 26 00:41:16.420: INFO: PersistentVolumeClaim pvc-bzqg7 found but phase is Pending instead of Bound.
Aug 26 00:41:18.580: INFO: PersistentVolumeClaim pvc-bzqg7 found and phase=Bound (2.317437507s)
Aug 26 00:41:18.580: INFO: Waiting up to 3m0s for PersistentVolume local-c6wn8 to have phase Bound
Aug 26 00:41:18.744: INFO: PersistentVolume local-c6wn8 found and phase=Bound (164.449717ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tnqd
STEP: Creating a pod to test subpath
Aug 26 00:41:19.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tnqd" in namespace "provisioning-7285" to be "Succeeded or Failed"
Aug 26 00:41:19.378: INFO: Pod "pod-subpath-test-preprovisionedpv-tnqd": Phase="Pending", Reason="", readiness=false. Elapsed: 158.060622ms
Aug 26 00:41:21.536: INFO: Pod "pod-subpath-test-preprovisionedpv-tnqd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316411031s
Aug 26 00:41:23.695: INFO: Pod "pod-subpath-test-preprovisionedpv-tnqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.474941581s
STEP: Saw pod success
Aug 26 00:41:23.695: INFO: Pod "pod-subpath-test-preprovisionedpv-tnqd" satisfied condition "Succeeded or Failed"
Aug 26 00:41:23.853: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-tnqd container test-container-subpath-preprovisionedpv-tnqd: <nil>
STEP: delete the pod
Aug 26 00:41:24.182: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tnqd to disappear
Aug 26 00:41:24.340: INFO: Pod pod-subpath-test-preprovisionedpv-tnqd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tnqd
Aug 26 00:41:24.340: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tnqd" in namespace "provisioning-7285"
... skipping 35 lines ...
Aug 26 00:41:01.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
Aug 26 00:41:01.896: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 26 00:41:02.231: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6236" in namespace "provisioning-6236" to be "Succeeded or Failed"
Aug 26 00:41:02.389: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Pending", Reason="", readiness=false. Elapsed: 158.265302ms
Aug 26 00:41:04.548: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317111026s
Aug 26 00:41:06.707: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47561241s
Aug 26 00:41:08.872: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640894253s
Aug 26 00:41:11.033: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Pending", Reason="", readiness=false. Elapsed: 8.801692081s
Aug 26 00:41:13.191: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Pending", Reason="", readiness=false. Elapsed: 10.960199645s
Aug 26 00:41:15.349: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Pending", Reason="", readiness=false. Elapsed: 13.118606401s
Aug 26 00:41:17.511: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.279726647s
STEP: Saw pod success
Aug 26 00:41:17.511: INFO: Pod "hostpath-symlink-prep-provisioning-6236" satisfied condition "Succeeded or Failed"
Aug 26 00:41:17.511: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6236" in namespace "provisioning-6236"
Aug 26 00:41:17.687: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6236" to be fully deleted
Aug 26 00:41:17.849: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-z4b6
Aug 26 00:41:20.326: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-6236 exec pod-subpath-test-inlinevolume-z4b6 --container test-container-volume-inlinevolume-z4b6 -- /bin/sh -c rm -r /test-volume/provisioning-6236'
Aug 26 00:41:21.977: INFO: stderr: ""
Aug 26 00:41:21.977: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-z4b6
Aug 26 00:41:21.977: INFO: Deleting pod "pod-subpath-test-inlinevolume-z4b6" in namespace "provisioning-6236"
Aug 26 00:41:22.137: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-z4b6" to be fully deleted
STEP: Deleting pod
Aug 26 00:41:30.453: INFO: Deleting pod "pod-subpath-test-inlinevolume-z4b6" in namespace "provisioning-6236"
Aug 26 00:41:30.771: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6236" in namespace "provisioning-6236" to be "Succeeded or Failed"
Aug 26 00:41:30.929: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Pending", Reason="", readiness=false. Elapsed: 158.117379ms
Aug 26 00:41:33.087: INFO: Pod "hostpath-symlink-prep-provisioning-6236": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316749061s
STEP: Saw pod success
Aug 26 00:41:33.087: INFO: Pod "hostpath-symlink-prep-provisioning-6236" satisfied condition "Succeeded or Failed"
Aug 26 00:41:33.087: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6236" in namespace "provisioning-6236"
Aug 26 00:41:33.255: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6236" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:33.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6236" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":14,"failed":0}

SS
------------------------------
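Several of the "[Testpattern: ...] subPath" cases in this log, including the hostPathSymlink one that just passed, revolve around a container that mounts only a sub-directory of a volume via subPath. A minimal illustration of that field, assuming k8s.io/api types; the volume here is a plain emptyDir stand-in rather than the driver-specific volumes the suite actually provisions, and all names are invented.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-demo",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					// Only this sub-directory of the volume is mounted into the
					// container; the "unmount after the subpath directory is
					// deleted" case above removes exactly that directory (via
					// the kubectl exec shown in the log) and then deletes the pod.
					SubPath: "provisioning-demo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```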
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:33.760: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362
Aug 26 00:41:18.064: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888" in namespace "security-context-test-6013" to be "Succeeded or Failed"
Aug 26 00:41:18.221: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888": Phase="Pending", Reason="", readiness=false. Elapsed: 156.336594ms
Aug 26 00:41:20.380: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888": Phase="Pending", Reason="", readiness=false. Elapsed: 2.315219242s
Aug 26 00:41:22.536: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471967716s
Aug 26 00:41:24.695: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888": Phase="Pending", Reason="", readiness=false. Elapsed: 6.630163296s
Aug 26 00:41:26.852: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888": Phase="Pending", Reason="", readiness=false. Elapsed: 8.78713429s
Aug 26 00:41:29.013: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888": Phase="Pending", Reason="", readiness=false. Elapsed: 10.948364815s
Aug 26 00:41:31.170: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888": Phase="Pending", Reason="", readiness=false. Elapsed: 13.105177942s
Aug 26 00:41:33.326: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.261945016s
Aug 26 00:41:33.327: INFO: Pod "alpine-nnp-true-9a498593-71ae-4eb1-909d-ef5a95588888" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:33.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6013" for this suite.


... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:362
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SSSS
------------------------------
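The Security Context case above sets allowPrivilegeEscalation to true on an alpine-based container and expects the pod to complete. A sketch of the relevant securityContext wiring, assuming k8s.io/api types and placeholder names; the related "privileged ... when false" and "uid != 0" cases elsewhere in this log toggle the Privileged and RunAsUser fields shown here.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-true-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "alpine-nnp-true-demo",
				Image:   "alpine:3.14",
				Command: []string{"id"},
				SecurityContext: &corev1.SecurityContext{
					// The knob the test name refers to: when true, the
					// no_new_privs flag is not set on the container process.
					AllowPrivilegeEscalation: boolPtr(true),
					// A non-root UID, as in the "uid != 0" variant of this suite.
					RunAsUser: int64Ptr(1000),
					// The "run the container as unprivileged when false" case
					// later in the log exercises this field instead.
					Privileged: boolPtr(false),
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```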
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:33.815: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 399 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:33.854: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 193 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:36.576: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 82 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:38.261: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Aug 26 00:41:33.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 00:41:34.808: INFO: Waiting up to 5m0s for pod "pod-757627b8-18a0-4e12-b366-db8b3965c3c4" in namespace "emptydir-630" to be "Succeeded or Failed"
Aug 26 00:41:34.966: INFO: Pod "pod-757627b8-18a0-4e12-b366-db8b3965c3c4": Phase="Pending", Reason="", readiness=false. Elapsed: 158.103348ms
Aug 26 00:41:37.124: INFO: Pod "pod-757627b8-18a0-4e12-b366-db8b3965c3c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316724996s
Aug 26 00:41:39.283: INFO: Pod "pod-757627b8-18a0-4e12-b366-db8b3965c3c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.475591409s
STEP: Saw pod success
Aug 26 00:41:39.283: INFO: Pod "pod-757627b8-18a0-4e12-b366-db8b3965c3c4" satisfied condition "Succeeded or Failed"
Aug 26 00:41:39.441: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-757627b8-18a0-4e12-b366-db8b3965c3c4 container test-container: <nil>
STEP: delete the pod
Aug 26 00:41:39.762: INFO: Waiting for pod pod-757627b8-18a0-4e12-b366-db8b3965c3c4 to disappear
Aug 26 00:41:39.920: INFO: Pod pod-757627b8-18a0-4e12-b366-db8b3965c3c4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.389 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:40.273: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 126 lines ...
• [SLOW TEST:13.040 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:41.070: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 77 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:41.164: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 68 lines ...
Aug 26 00:41:33.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Aug 26 00:41:34.846: INFO: Waiting up to 5m0s for pod "downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc" in namespace "downward-api-6466" to be "Succeeded or Failed"
Aug 26 00:41:35.003: INFO: Pod "downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 156.942683ms
Aug 26 00:41:37.161: INFO: Pod "downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314456254s
Aug 26 00:41:39.318: INFO: Pod "downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472091182s
Aug 26 00:41:41.476: INFO: Pod "downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.630098708s
STEP: Saw pod success
Aug 26 00:41:41.477: INFO: Pod "downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc" satisfied condition "Succeeded or Failed"
Aug 26 00:41:41.642: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc container dapi-container: <nil>
STEP: delete the pod
Aug 26 00:41:41.965: INFO: Waiting for pod downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc to disappear
Aug 26 00:41:42.122: INFO: Pod downward-api-9ddeabbf-d19f-49a6-b81e-105cf0f06ffc no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.535 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:42.449: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 55 lines ...
Aug 26 00:41:43.619: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.125 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:141

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 52 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
Aug 26 00:41:21.020: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 26 00:41:21.176: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-djtb
STEP: Creating a pod to test subpath
Aug 26 00:41:21.334: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-djtb" in namespace "provisioning-2875" to be "Succeeded or Failed"
Aug 26 00:41:21.489: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 155.334949ms
Aug 26 00:41:23.647: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31329268s
Aug 26 00:41:25.803: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468672598s
Aug 26 00:41:27.958: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623934834s
Aug 26 00:41:30.113: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.779420083s
Aug 26 00:41:32.269: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.935197898s
Aug 26 00:41:34.425: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.090946048s
Aug 26 00:41:36.581: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.246668881s
Aug 26 00:41:38.741: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.406975832s
Aug 26 00:41:40.896: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.562490653s
Aug 26 00:41:43.052: INFO: Pod "pod-subpath-test-inlinevolume-djtb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.718061662s
STEP: Saw pod success
Aug 26 00:41:43.052: INFO: Pod "pod-subpath-test-inlinevolume-djtb" satisfied condition "Succeeded or Failed"
Aug 26 00:41:43.207: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-djtb container test-container-volume-inlinevolume-djtb: <nil>
STEP: delete the pod
Aug 26 00:41:43.526: INFO: Waiting for pod pod-subpath-test-inlinevolume-djtb to disappear
Aug 26 00:41:43.681: INFO: Pod pod-subpath-test-inlinevolume-djtb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-djtb
Aug 26 00:41:43.681: INFO: Deleting pod "pod-subpath-test-inlinevolume-djtb" in namespace "provisioning-2875"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":30,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:44.346: INFO: Only supported for providers [gce gke] (not aws)
... skipping 215 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330
Aug 26 00:41:42.158: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-9a0f74d4-5fc4-4158-a95a-a1d0a0d7982a" in namespace "security-context-test-7492" to be "Succeeded or Failed"
Aug 26 00:41:42.319: INFO: Pod "alpine-nnp-nil-9a0f74d4-5fc4-4158-a95a-a1d0a0d7982a": Phase="Pending", Reason="", readiness=false. Elapsed: 161.370674ms
Aug 26 00:41:44.481: INFO: Pod "alpine-nnp-nil-9a0f74d4-5fc4-4158-a95a-a1d0a0d7982a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323142629s
Aug 26 00:41:46.643: INFO: Pod "alpine-nnp-nil-9a0f74d4-5fc4-4158-a95a-a1d0a0d7982a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.485469973s
Aug 26 00:41:46.643: INFO: Pod "alpine-nnp-nil-9a0f74d4-5fc4-4158-a95a-a1d0a0d7982a" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:46.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7492" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:330
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:41:42.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 26 00:41:43.165: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-23a31340-adf6-4768-bc23-36e219e93a52" in namespace "security-context-test-1981" to be "Succeeded or Failed"
Aug 26 00:41:43.327: INFO: Pod "busybox-privileged-false-23a31340-adf6-4768-bc23-36e219e93a52": Phase="Pending", Reason="", readiness=false. Elapsed: 161.4744ms
Aug 26 00:41:45.484: INFO: Pod "busybox-privileged-false-23a31340-adf6-4768-bc23-36e219e93a52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319189762s
Aug 26 00:41:47.662: INFO: Pod "busybox-privileged-false-23a31340-adf6-4768-bc23-36e219e93a52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.497180711s
Aug 26 00:41:47.662: INFO: Pod "busybox-privileged-false-23a31340-adf6-4768-bc23-36e219e93a52" satisfied condition "Succeeded or Failed"
Aug 26 00:41:47.822: INFO: Got logs for pod "busybox-privileged-false-23a31340-adf6-4768-bc23-36e219e93a52": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:41:47.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1981" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:48.150: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver windows-gcepd doesn't support  -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:174
------------------------------
... skipping 25 lines ...
STEP: Creating a kubernetes client
Aug 26 00:41:12.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Aug 26 00:41:13.169: INFO: PodSpec: initContainers in spec.initContainers
Aug 26 00:41:41.003: FAIL: Expected
    <*errors.errorString | 0xc0037b2d60>: {
        s: "second init container should have reason PodInitializing: v1.ContainerStatus{Name:\"init1\", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037eef00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:\"docker.io/library/busybox:1.29\", ImageID:\"\", ContainerID:\"\", Started:(*bool)(nil)}",
    }
to be nil

Full Stack Trace
... skipping 10 lines ...
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "init-container-7710".
STEP: Found 6 events.
Aug 26 00:41:41.160: INFO: At 2021-08-26 00:41:13 +0000 UTC - event for pod-init-3ec68c31-4172-4a10-8418-65a0fe359b5b: {default-scheduler } Scheduled: Successfully assigned init-container-7710/pod-init-3ec68c31-4172-4a10-8418-65a0fe359b5b to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:41:41.160: INFO: At 2021-08-26 00:41:14 +0000 UTC - event for pod-init-3ec68c31-4172-4a10-8418-65a0fe359b5b: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 00:41:41.160: INFO: At 2021-08-26 00:41:40 +0000 UTC - event for pod-init-3ec68c31-4172-4a10-8418-65a0fe359b5b: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:41:41.160: INFO: At 2021-08-26 00:41:40 +0000 UTC - event for pod-init-3ec68c31-4172-4a10-8418-65a0fe359b5b: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:41:41.160: INFO: At 2021-08-26 00:41:40 +0000 UTC - event for pod-init-3ec68c31-4172-4a10-8418-65a0fe359b5b: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 00:41:41.160: INFO: At 2021-08-26 00:41:40 +0000 UTC - event for pod-init-3ec68c31-4172-4a10-8418-65a0fe359b5b: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 00:41:41.319: INFO: POD                                            NODE                                             PHASE    GRACE  CONDITIONS
Aug 26 00:41:41.319: INFO: pod-init-3ec68c31-4172-4a10-8418-65a0fe359b5b  ip-172-20-62-60.ap-northeast-2.compute.internal  Pending         [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:13 +0000 UTC ContainersNotInitialized containers with incomplete status: [init1 init2]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:13 +0000 UTC ContainersNotReady containers with unready status: [run1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:13 +0000 UTC ContainersNotReady containers with unready status: [run1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:13 +0000 UTC  }]
Aug 26 00:41:41.319: INFO: 
Aug 26 00:41:41.477: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 00:41:41.640: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 2138 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 00:41:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 00:41:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 00:41:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 00:41:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 189 lines ...
STEP: Destroying namespace "init-container-7710" for this suite.


• Failure [36.099 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Aug 26 00:41:41.003: Expected
      <*errors.errorString | 0xc0037b2d60>: {
          s: "second init container should have reason PodInitializing: v1.ContainerStatus{Name:\"init1\", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037eef00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:\"docker.io/library/busybox:1.29\", ImageID:\"\", ContainerID:\"\", Started:(*bool)(nil)}",
      }
  to be nil

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:547
------------------------------
{"msg":"FAILED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":2,"skipped":4,"failed":1,"failures":["[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:48.499: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 73 lines ...
• [SLOW TEST:12.162 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:50.473: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 174 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:41:28.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47
    should be mountable
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:41:52.475: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:41:48.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Aug 26 00:41:49.500: INFO: Waiting up to 5m0s for pod "security-context-415e1ee6-0f62-4652-8a59-9281d3425d57" in namespace "security-context-9104" to be "Succeeded or Failed"
Aug 26 00:41:49.656: INFO: Pod "security-context-415e1ee6-0f62-4652-8a59-9281d3425d57": Phase="Pending", Reason="", readiness=false. Elapsed: 155.890809ms
Aug 26 00:41:51.812: INFO: Pod "security-context-415e1ee6-0f62-4652-8a59-9281d3425d57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312358589s
Aug 26 00:41:53.968: INFO: Pod "security-context-415e1ee6-0f62-4652-8a59-9281d3425d57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468536512s
Aug 26 00:41:56.125: INFO: Pod "security-context-415e1ee6-0f62-4652-8a59-9281d3425d57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.624803059s
STEP: Saw pod success
Aug 26 00:41:56.125: INFO: Pod "security-context-415e1ee6-0f62-4652-8a59-9281d3425d57" satisfied condition "Succeeded or Failed"
Aug 26 00:41:56.280: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod security-context-415e1ee6-0f62-4652-8a59-9281d3425d57 container test-container: <nil>
STEP: delete the pod
Aug 26 00:41:56.598: INFO: Waiting for pod security-context-415e1ee6-0f62-4652-8a59-9281d3425d57 to disappear
Aug 26 00:41:56.754: INFO: Pod security-context-415e1ee6-0f62-4652-8a59-9281d3425d57 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.509 seconds]
[k8s.io] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":3,"skipped":14,"failed":1,"failures":["[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]"]}

S
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":4,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:16.037 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":5,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:00.520: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 45 lines ...
• [SLOW TEST:8.076 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:00.616: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 31 lines ...
• [SLOW TEST:7.843 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":3,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:00.829: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 142 lines ...
• [SLOW TEST:64.793 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:47
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:56
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 26 00:41:58.025: INFO: Waiting up to 5m0s for pod "pod-3144d79d-6a05-4474-92e0-a11ad42163ca" in namespace "emptydir-8150" to be "Succeeded or Failed"
Aug 26 00:41:58.181: INFO: Pod "pod-3144d79d-6a05-4474-92e0-a11ad42163ca": Phase="Pending", Reason="", readiness=false. Elapsed: 155.813508ms
Aug 26 00:42:00.337: INFO: Pod "pod-3144d79d-6a05-4474-92e0-a11ad42163ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311684973s
Aug 26 00:42:02.495: INFO: Pod "pod-3144d79d-6a05-4474-92e0-a11ad42163ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.469409548s
STEP: Saw pod success
Aug 26 00:42:02.495: INFO: Pod "pod-3144d79d-6a05-4474-92e0-a11ad42163ca" satisfied condition "Succeeded or Failed"
Aug 26 00:42:02.651: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-3144d79d-6a05-4474-92e0-a11ad42163ca container test-container: <nil>
STEP: delete the pod
Aug 26 00:42:02.969: INFO: Waiting for pod pod-3144d79d-6a05-4474-92e0-a11ad42163ca to disappear
Aug 26 00:42:03.124: INFO: Pod pod-3144d79d-6a05-4474-92e0-a11ad42163ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
    new files should be created with FSGroup ownership when container is non-root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:56
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":4,"skipped":15,"failed":1,"failures":["[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:03.448: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:04.947: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:05.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4523" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":2,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:05.559: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 123 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:05.715: INFO: Driver vsphere doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 26 00:42:01.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b43069e-ee83-4e0d-a1ab-0a3864e48e9e" in namespace "downward-api-5700" to be "Succeeded or Failed"
Aug 26 00:42:01.620: INFO: Pod "downwardapi-volume-6b43069e-ee83-4e0d-a1ab-0a3864e48e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 159.327977ms
Aug 26 00:42:03.775: INFO: Pod "downwardapi-volume-6b43069e-ee83-4e0d-a1ab-0a3864e48e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314914594s
Aug 26 00:42:05.931: INFO: Pod "downwardapi-volume-6b43069e-ee83-4e0d-a1ab-0a3864e48e9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.470326792s
STEP: Saw pod success
Aug 26 00:42:05.931: INFO: Pod "downwardapi-volume-6b43069e-ee83-4e0d-a1ab-0a3864e48e9e" satisfied condition "Succeeded or Failed"
Aug 26 00:42:06.086: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod downwardapi-volume-6b43069e-ee83-4e0d-a1ab-0a3864e48e9e container client-container: <nil>
STEP: delete the pod
Aug 26 00:42:06.419: INFO: Waiting for pod downwardapi-volume-6b43069e-ee83-4e0d-a1ab-0a3864e48e9e to disappear
Aug 26 00:42:06.577: INFO: Pod downwardapi-volume-6b43069e-ee83-4e0d-a1ab-0a3864e48e9e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.364 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:06.903: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 89 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:47
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:72
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 26 00:42:01.601: INFO: Waiting up to 5m0s for pod "pod-65b17a98-bc53-46ee-b05e-1c61585967fe" in namespace "emptydir-7260" to be "Succeeded or Failed"
Aug 26 00:42:01.759: INFO: Pod "pod-65b17a98-bc53-46ee-b05e-1c61585967fe": Phase="Pending", Reason="", readiness=false. Elapsed: 158.389458ms
Aug 26 00:42:03.917: INFO: Pod "pod-65b17a98-bc53-46ee-b05e-1c61585967fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316475449s
Aug 26 00:42:06.075: INFO: Pod "pod-65b17a98-bc53-46ee-b05e-1c61585967fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.474432191s
STEP: Saw pod success
Aug 26 00:42:06.075: INFO: Pod "pod-65b17a98-bc53-46ee-b05e-1c61585967fe" satisfied condition "Succeeded or Failed"
Aug 26 00:42:06.237: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-65b17a98-bc53-46ee-b05e-1c61585967fe container test-container: <nil>
STEP: delete the pod
Aug 26 00:42:06.572: INFO: Waiting for pod pod-65b17a98-bc53-46ee-b05e-1c61585967fe to disappear
Aug 26 00:42:06.730: INFO: Pod pod-65b17a98-bc53-46ee-b05e-1c61585967fe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:72
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":5,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:07.073: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 68 lines ...
Aug 26 00:41:36.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
Aug 26 00:41:37.392: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 26 00:41:37.704: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3932" in namespace "provisioning-3932" to be "Succeeded or Failed"
Aug 26 00:41:37.858: INFO: Pod "hostpath-symlink-prep-provisioning-3932": Phase="Pending", Reason="", readiness=false. Elapsed: 153.546966ms
Aug 26 00:41:40.012: INFO: Pod "hostpath-symlink-prep-provisioning-3932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.307909686s
STEP: Saw pod success
Aug 26 00:41:40.012: INFO: Pod "hostpath-symlink-prep-provisioning-3932" satisfied condition "Succeeded or Failed"
Aug 26 00:41:40.012: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3932" in namespace "provisioning-3932"
Aug 26 00:41:40.170: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3932" to be fully deleted
Aug 26 00:41:40.323: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-kcqf
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 00:41:40.477: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-kcqf" in namespace "provisioning-3932" to be "Succeeded or Failed"
Aug 26 00:41:40.630: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Pending", Reason="", readiness=false. Elapsed: 153.214223ms
Aug 26 00:41:42.784: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30722634s
Aug 26 00:41:44.938: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 4.46083619s
Aug 26 00:41:47.092: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 6.614823745s
Aug 26 00:41:49.246: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 8.768505904s
Aug 26 00:41:51.399: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 10.922153975s
Aug 26 00:41:53.553: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 13.076219764s
Aug 26 00:41:55.707: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 15.229849801s
Aug 26 00:41:57.861: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 17.383568118s
Aug 26 00:42:00.015: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 19.537315605s
Aug 26 00:42:02.169: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Running", Reason="", readiness=true. Elapsed: 21.691752659s
Aug 26 00:42:04.323: INFO: Pod "pod-subpath-test-inlinevolume-kcqf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.845598997s
STEP: Saw pod success
Aug 26 00:42:04.323: INFO: Pod "pod-subpath-test-inlinevolume-kcqf" satisfied condition "Succeeded or Failed"
Aug 26 00:42:04.476: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-kcqf container test-container-subpath-inlinevolume-kcqf: <nil>
STEP: delete the pod
Aug 26 00:42:04.809: INFO: Waiting for pod pod-subpath-test-inlinevolume-kcqf to disappear
Aug 26 00:42:04.966: INFO: Pod pod-subpath-test-inlinevolume-kcqf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-kcqf
Aug 26 00:42:04.966: INFO: Deleting pod "pod-subpath-test-inlinevolume-kcqf" in namespace "provisioning-3932"
STEP: Deleting pod
Aug 26 00:42:05.120: INFO: Deleting pod "pod-subpath-test-inlinevolume-kcqf" in namespace "provisioning-3932"
Aug 26 00:42:05.428: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3932" in namespace "provisioning-3932" to be "Succeeded or Failed"
Aug 26 00:42:05.582: INFO: Pod "hostpath-symlink-prep-provisioning-3932": Phase="Pending", Reason="", readiness=false. Elapsed: 153.580856ms
Aug 26 00:42:07.735: INFO: Pod "hostpath-symlink-prep-provisioning-3932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.307181691s
STEP: Saw pod success
Aug 26 00:42:07.735: INFO: Pod "hostpath-symlink-prep-provisioning-3932" satisfied condition "Succeeded or Failed"
Aug 26 00:42:07.735: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3932" in namespace "provisioning-3932"
Aug 26 00:42:07.897: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3932" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:08.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3932" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:08.371: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 42 lines ...
Aug 26 00:42:10.397: INFO: AfterEach: Cleaning up test resources.
Aug 26 00:42:10.397: INFO: pvc is nil
Aug 26 00:42:10.397: INFO: Deleting PersistentVolume "hostpath-xtbjt"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:10.562: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 72 lines ...
STEP: Wait for the deployment to be ready
Aug 26 00:42:03.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535322, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535322, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535322, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535322, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 00:42:05.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535322, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535322, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535322, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535322, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 26 00:42:08.545: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:09.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7571" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:10.476 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:10.839: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 132 lines ...
Aug 26 00:42:00.819: INFO: PersistentVolumeClaim pvc-cr2mf found but phase is Pending instead of Bound.
Aug 26 00:42:02.979: INFO: PersistentVolumeClaim pvc-cr2mf found and phase=Bound (13.136042071s)
Aug 26 00:42:02.979: INFO: Waiting up to 3m0s for PersistentVolume local-2s7bh to have phase Bound
Aug 26 00:42:03.138: INFO: PersistentVolume local-2s7bh found and phase=Bound (159.363954ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4scf
STEP: Creating a pod to test subpath
Aug 26 00:42:03.618: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4scf" in namespace "provisioning-7269" to be "Succeeded or Failed"
Aug 26 00:42:03.777: INFO: Pod "pod-subpath-test-preprovisionedpv-4scf": Phase="Pending", Reason="", readiness=false. Elapsed: 159.453704ms
Aug 26 00:42:05.937: INFO: Pod "pod-subpath-test-preprovisionedpv-4scf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319301488s
Aug 26 00:42:08.098: INFO: Pod "pod-subpath-test-preprovisionedpv-4scf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.479508257s
STEP: Saw pod success
Aug 26 00:42:08.098: INFO: Pod "pod-subpath-test-preprovisionedpv-4scf" satisfied condition "Succeeded or Failed"
Aug 26 00:42:08.257: INFO: Trying to get logs from node ip-172-20-62-60.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-4scf container test-container-volume-preprovisionedpv-4scf: <nil>
STEP: delete the pod
Aug 26 00:42:08.583: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4scf to disappear
Aug 26 00:42:08.743: INFO: Pod pod-subpath-test-preprovisionedpv-4scf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4scf
Aug 26 00:42:08.743: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4scf" in namespace "provisioning-7269"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":53,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:11.016: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 122 lines ...
• [SLOW TEST:6.865 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:12.470: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 59 lines ...
• [SLOW TEST:13.272 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:14.216: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 83 lines ...
Aug 26 00:42:07.149: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1397 explain e2e-test-crd-publish-openapi-4990-crds.spec'
Aug 26 00:42:07.863: INFO: stderr: ""
Aug 26 00:42:07.863: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4990-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Aug 26 00:42:07.863: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1397 explain e2e-test-crd-publish-openapi-4990-crds.spec.bars'
Aug 26 00:42:08.567: INFO: stderr: ""
Aug 26 00:42:08.567: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4990-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Aug 26 00:42:08.567: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1397 explain e2e-test-crd-publish-openapi-4990-crds.spec.bars2'
Aug 26 00:42:09.752: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:16.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1397" for this suite.
... skipping 2 lines ...
• [SLOW TEST:29.398 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:16.583: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 182 lines ...
Aug 26 00:41:34.618: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-3104-aws-sc9dwx8
STEP: creating a claim
Aug 26 00:41:34.775: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-cnms
STEP: Creating a pod to test exec-volume-test
Aug 26 00:41:35.247: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-cnms" in namespace "volume-3104" to be "Succeeded or Failed"
Aug 26 00:41:35.403: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 156.517509ms
Aug 26 00:41:37.560: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313344451s
Aug 26 00:41:39.717: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469978648s
Aug 26 00:41:41.874: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626880778s
Aug 26 00:41:44.029: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 8.781725611s
Aug 26 00:41:46.184: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 10.936761342s
... skipping 2 lines ...
Aug 26 00:41:52.692: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 17.445496589s
Aug 26 00:41:54.848: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 19.601147868s
Aug 26 00:41:57.006: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Pending", Reason="", readiness=false. Elapsed: 21.758722803s
Aug 26 00:41:59.161: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Running", Reason="", readiness=true. Elapsed: 23.913827615s
Aug 26 00:42:01.316: INFO: Pod "exec-volume-test-dynamicpv-cnms": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.068647375s
STEP: Saw pod success
Aug 26 00:42:01.316: INFO: Pod "exec-volume-test-dynamicpv-cnms" satisfied condition "Succeeded or Failed"
Aug 26 00:42:01.470: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod exec-volume-test-dynamicpv-cnms container exec-container-dynamicpv-cnms: <nil>
STEP: delete the pod
Aug 26 00:42:01.787: INFO: Waiting for pod exec-volume-test-dynamicpv-cnms to disappear
Aug 26 00:42:01.942: INFO: Pod exec-volume-test-dynamicpv-cnms no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-cnms
Aug 26 00:42:01.942: INFO: Deleting pod "exec-volume-test-dynamicpv-cnms" in namespace "volume-3104"
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":23,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":5,"skipped":17,"failed":1,"failures":["[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:18.999: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":5,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 95 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:752
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:805
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":1,"skipped":14,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:26.296: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:42:19.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:7.345 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 34 lines ...
Aug 26 00:41:46.092: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9346
Aug 26 00:41:46.251: INFO: creating *v1.StatefulSet: csi-mock-volumes-9346-5736/csi-mockplugin-attacher
Aug 26 00:41:46.410: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9346"
Aug 26 00:41:46.569: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9346 to register on node ip-172-20-60-101.ap-northeast-2.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Aug 26 00:41:54.847: INFO: Error getting logs for pod inline-volume-9qqm7: the server rejected our request for an unknown reason (get pods inline-volume-9qqm7)
Aug 26 00:41:54.847: INFO: Deleting pod "inline-volume-9qqm7" in namespace "csi-mock-volumes-9346"
Aug 26 00:41:55.006: INFO: Wait up to 5m0s for pod "inline-volume-9qqm7" to be fully deleted
STEP: Deleting the previously created pod
Aug 26 00:42:01.323: INFO: Deleting pod "pvc-volume-tester-cp9vt" in namespace "csi-mock-volumes-9346"
Aug 26 00:42:01.482: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cp9vt" to be fully deleted
STEP: Checking CSI driver logs
Aug 26 00:42:09.962: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Aug 26 00:42:09.963: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-cp9vt
Aug 26 00:42:09.963: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-9346
Aug 26 00:42:09.963: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: d5066d21-ab65-410d-ab80-724a7637adea
Aug 26 00:42:09.963: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Aug 26 00:42:09.963: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-baa399b6c41d511c5e38928013b82222a9d17bc7913ec096c62e0e90fef803b0","target_path":"/var/lib/kubelet/pods/d5066d21-ab65-410d-ab80-724a7637adea/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-cp9vt
Aug 26 00:42:09.963: INFO: Deleting pod "pvc-volume-tester-cp9vt" in namespace "csi-mock-volumes-9346"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-9346
STEP: Waiting for namespaces [csi-mock-volumes-9346] to vanish
STEP: uninstalling csi mock driver
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:308
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:358
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":4,"skipped":46,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:27.602: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 182 lines ...
• [SLOW TEST:40.683 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2655
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":6,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:28.893: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:31.749: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 46 lines ...
Aug 26 00:42:27.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Aug 26 00:42:28.640: INFO: Waiting up to 5m0s for pod "security-context-a6756757-9c69-4c62-83de-77df8b66e072" in namespace "security-context-6524" to be "Succeeded or Failed"
Aug 26 00:42:28.798: INFO: Pod "security-context-a6756757-9c69-4c62-83de-77df8b66e072": Phase="Pending", Reason="", readiness=false. Elapsed: 158.101578ms
Aug 26 00:42:30.958: INFO: Pod "security-context-a6756757-9c69-4c62-83de-77df8b66e072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.318328293s
STEP: Saw pod success
Aug 26 00:42:30.958: INFO: Pod "security-context-a6756757-9c69-4c62-83de-77df8b66e072" satisfied condition "Succeeded or Failed"
Aug 26 00:42:31.116: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod security-context-a6756757-9c69-4c62-83de-77df8b66e072 container test-container: <nil>
STEP: delete the pod
Aug 26 00:42:31.441: INFO: Waiting for pod security-context-a6756757-9c69-4c62-83de-77df8b66e072 to disappear
Aug 26 00:42:31.599: INFO: Pod security-context-a6756757-9c69-4c62-83de-77df8b66e072 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:31.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6524" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":5,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:31.931: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 176 lines ...
• [SLOW TEST:21.382 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":4,"skipped":76,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 53 lines ...
• [SLOW TEST:27.774 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:949
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":2,"skipped":14,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:32.822: INFO: Driver windows-gcepd doesn't support ext3 -- skipping
... skipping 103 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 26 00:42:32.983: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74794925-e6c5-45ef-9184-18632aa6593e" in namespace "projected-6790" to be "Succeeded or Failed"
Aug 26 00:42:33.141: INFO: Pod "downwardapi-volume-74794925-e6c5-45ef-9184-18632aa6593e": Phase="Pending", Reason="", readiness=false. Elapsed: 158.074455ms
Aug 26 00:42:35.300: INFO: Pod "downwardapi-volume-74794925-e6c5-45ef-9184-18632aa6593e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316961508s
STEP: Saw pod success
Aug 26 00:42:35.300: INFO: Pod "downwardapi-volume-74794925-e6c5-45ef-9184-18632aa6593e" satisfied condition "Succeeded or Failed"
Aug 26 00:42:35.459: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod downwardapi-volume-74794925-e6c5-45ef-9184-18632aa6593e container client-container: <nil>
STEP: delete the pod
Aug 26 00:42:35.783: INFO: Waiting for pod downwardapi-volume-74794925-e6c5-45ef-9184-18632aa6593e to disappear
Aug 26 00:42:35.941: INFO: Pod downwardapi-volume-74794925-e6c5-45ef-9184-18632aa6593e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:35.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6790" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:36.323: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 153 lines ...
Aug 26 00:41:37.193: INFO: PersistentVolumeClaim csi-hostpath9nmtr found but phase is Pending instead of Bound.
Aug 26 00:41:39.351: INFO: PersistentVolumeClaim csi-hostpath9nmtr found but phase is Pending instead of Bound.
Aug 26 00:41:41.509: INFO: PersistentVolumeClaim csi-hostpath9nmtr found but phase is Pending instead of Bound.
Aug 26 00:41:43.668: INFO: PersistentVolumeClaim csi-hostpath9nmtr found and phase=Bound (36.850284021s)
STEP: Creating pod pod-subpath-test-dynamicpv-rkv5
STEP: Creating a pod to test subpath
Aug 26 00:41:44.144: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-rkv5" in namespace "provisioning-8888" to be "Succeeded or Failed"
Aug 26 00:41:44.302: INFO: Pod "pod-subpath-test-dynamicpv-rkv5": Phase="Pending", Reason="", readiness=false. Elapsed: 158.238812ms
Aug 26 00:41:46.461: INFO: Pod "pod-subpath-test-dynamicpv-rkv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317179811s
Aug 26 00:41:48.619: INFO: Pod "pod-subpath-test-dynamicpv-rkv5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.475530504s
Aug 26 00:41:50.778: INFO: Pod "pod-subpath-test-dynamicpv-rkv5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.634161735s
Aug 26 00:41:52.937: INFO: Pod "pod-subpath-test-dynamicpv-rkv5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.792916331s
Aug 26 00:41:55.101: INFO: Pod "pod-subpath-test-dynamicpv-rkv5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.957159469s
STEP: Saw pod success
Aug 26 00:41:55.101: INFO: Pod "pod-subpath-test-dynamicpv-rkv5" satisfied condition "Succeeded or Failed"
Aug 26 00:41:55.259: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-rkv5 container test-container-subpath-dynamicpv-rkv5: <nil>
STEP: delete the pod
Aug 26 00:41:55.589: INFO: Waiting for pod pod-subpath-test-dynamicpv-rkv5 to disappear
Aug 26 00:41:55.747: INFO: Pod pod-subpath-test-dynamicpv-rkv5 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-rkv5
Aug 26 00:41:55.747: INFO: Deleting pod "pod-subpath-test-dynamicpv-rkv5" in namespace "provisioning-8888"
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:36.927: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 43 lines ...
Aug 26 00:42:36.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 26 00:42:38.038: INFO: Waiting up to 5m0s for pod "pod-d270f78a-ba81-4795-9116-ae54ea0bb7e7" in namespace "emptydir-7379" to be "Succeeded or Failed"
Aug 26 00:42:38.219: INFO: Pod "pod-d270f78a-ba81-4795-9116-ae54ea0bb7e7": Phase="Pending", Reason="", readiness=false. Elapsed: 181.509831ms
Aug 26 00:42:40.458: INFO: Pod "pod-d270f78a-ba81-4795-9116-ae54ea0bb7e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.420770261s
STEP: Saw pod success
Aug 26 00:42:40.458: INFO: Pod "pod-d270f78a-ba81-4795-9116-ae54ea0bb7e7" satisfied condition "Succeeded or Failed"
Aug 26 00:42:40.628: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-d270f78a-ba81-4795-9116-ae54ea0bb7e7 container test-container: <nil>
STEP: delete the pod
Aug 26 00:42:40.956: INFO: Waiting for pod pod-d270f78a-ba81-4795-9116-ae54ea0bb7e7 to disappear
Aug 26 00:42:41.114: INFO: Pod pod-d270f78a-ba81-4795-9116-ae54ea0bb7e7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:41.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7379" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:41.468: INFO: Driver local doesn't support ext4 -- skipping
... skipping 60 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1303
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":5,"skipped":80,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:42:34.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":6,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:42.330: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 212 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:236
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 123 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:48.960: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 81 lines ...
• [SLOW TEST:8.859 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:50.420: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":7,"skipped":83,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:42:47.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Aug 26 00:42:48.270: INFO: Waiting up to 5m0s for pod "var-expansion-f0649adb-b5e4-4927-b42d-d7d2036661fe" in namespace "var-expansion-9349" to be "Succeeded or Failed"
Aug 26 00:42:48.428: INFO: Pod "var-expansion-f0649adb-b5e4-4927-b42d-d7d2036661fe": Phase="Pending", Reason="", readiness=false. Elapsed: 158.103105ms
Aug 26 00:42:50.587: INFO: Pod "var-expansion-f0649adb-b5e4-4927-b42d-d7d2036661fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316747992s
STEP: Saw pod success
Aug 26 00:42:50.587: INFO: Pod "var-expansion-f0649adb-b5e4-4927-b42d-d7d2036661fe" satisfied condition "Succeeded or Failed"
Aug 26 00:42:50.745: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod var-expansion-f0649adb-b5e4-4927-b42d-d7d2036661fe container dapi-container: <nil>
STEP: delete the pod
Aug 26 00:42:51.073: INFO: Waiting for pod var-expansion-f0649adb-b5e4-4927-b42d-d7d2036661fe to disappear
Aug 26 00:42:51.231: INFO: Pod var-expansion-f0649adb-b5e4-4927-b42d-d7d2036661fe no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:51.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9349" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":8,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:51.561: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:51.964: INFO: Driver gcepd doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:51.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-587" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:52.125: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 23 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
... skipping 206 lines ...
Aug 26 00:42:47.621: INFO: PersistentVolumeClaim pvc-mbxhp found but phase is Pending instead of Bound.
Aug 26 00:42:49.780: INFO: PersistentVolumeClaim pvc-mbxhp found and phase=Bound (13.124282009s)
Aug 26 00:42:49.780: INFO: Waiting up to 3m0s for PersistentVolume local-6n4sf to have phase Bound
Aug 26 00:42:49.938: INFO: PersistentVolume local-6n4sf found and phase=Bound (157.832667ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-tn65
STEP: Creating a pod to test exec-volume-test
Aug 26 00:42:50.418: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-tn65" in namespace "volume-3283" to be "Succeeded or Failed"
Aug 26 00:42:50.576: INFO: Pod "exec-volume-test-preprovisionedpv-tn65": Phase="Pending", Reason="", readiness=false. Elapsed: 157.984113ms
Aug 26 00:42:52.735: INFO: Pod "exec-volume-test-preprovisionedpv-tn65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.31681616s
STEP: Saw pod success
Aug 26 00:42:52.735: INFO: Pod "exec-volume-test-preprovisionedpv-tn65" satisfied condition "Succeeded or Failed"
Aug 26 00:42:52.893: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-tn65 container exec-container-preprovisionedpv-tn65: <nil>
STEP: delete the pod
Aug 26 00:42:53.216: INFO: Waiting for pod exec-volume-test-preprovisionedpv-tn65 to disappear
Aug 26 00:42:53.374: INFO: Pod exec-volume-test-preprovisionedpv-tn65 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-tn65
Aug 26 00:42:53.374: INFO: Deleting pod "exec-volume-test-preprovisionedpv-tn65" in namespace "volume-3283"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":48,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:55.428: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Aug 26 00:42:52.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 26 00:42:53.118: INFO: Waiting up to 5m0s for pod "pod-aa6263cf-dfdf-4d24-89ea-e95509220d89" in namespace "emptydir-3444" to be "Succeeded or Failed"
Aug 26 00:42:53.276: INFO: Pod "pod-aa6263cf-dfdf-4d24-89ea-e95509220d89": Phase="Pending", Reason="", readiness=false. Elapsed: 157.777436ms
Aug 26 00:42:55.438: INFO: Pod "pod-aa6263cf-dfdf-4d24-89ea-e95509220d89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.319629444s
STEP: Saw pod success
Aug 26 00:42:55.438: INFO: Pod "pod-aa6263cf-dfdf-4d24-89ea-e95509220d89" satisfied condition "Succeeded or Failed"
Aug 26 00:42:55.596: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-aa6263cf-dfdf-4d24-89ea-e95509220d89 container test-container: <nil>
STEP: delete the pod
Aug 26 00:42:55.921: INFO: Waiting for pod pod-aa6263cf-dfdf-4d24-89ea-e95509220d89 to disappear
Aug 26 00:42:56.079: INFO: Pod pod-aa6263cf-dfdf-4d24-89ea-e95509220d89 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:56.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3444" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:56.422: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 121 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:265
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:42:53.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:42:57.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-5598" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":6,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:42:58.141: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:42:52.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-5a9fc1b2-7988-4b5f-a6a4-f427861b494c
STEP: Creating a pod to test consume configMaps
Aug 26 00:42:53.840: INFO: Waiting up to 5m0s for pod "pod-configmaps-baa7a220-163a-4dd4-9b17-207673974167" in namespace "configmap-6275" to be "Succeeded or Failed"
Aug 26 00:42:53.998: INFO: Pod "pod-configmaps-baa7a220-163a-4dd4-9b17-207673974167": Phase="Pending", Reason="", readiness=false. Elapsed: 158.327737ms
Aug 26 00:42:56.157: INFO: Pod "pod-configmaps-baa7a220-163a-4dd4-9b17-207673974167": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31716993s
Aug 26 00:42:58.316: INFO: Pod "pod-configmaps-baa7a220-163a-4dd4-9b17-207673974167": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.476532017s
STEP: Saw pod success
Aug 26 00:42:58.316: INFO: Pod "pod-configmaps-baa7a220-163a-4dd4-9b17-207673974167" satisfied condition "Succeeded or Failed"
Aug 26 00:42:58.479: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-configmaps-baa7a220-163a-4dd4-9b17-207673974167 container configmap-volume-test: <nil>
STEP: delete the pod
Aug 26 00:42:58.804: INFO: Waiting for pod pod-configmaps-baa7a220-163a-4dd4-9b17-207673974167 to disappear
Aug 26 00:42:58.962: INFO: Pod pod-configmaps-baa7a220-163a-4dd4-9b17-207673974167 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 61 lines ...
• [SLOW TEST:21.668 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":7,"skipped":85,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:04.055: INFO: Only supported for providers [vsphere] (not aws)
... skipping 142 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:06.299: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:07.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-5484" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":8,"skipped":89,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":40,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:43.220 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1055
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":7,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:12.158: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 66 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 26 00:43:08.901: INFO: Waiting up to 5m0s for pod "downwardapi-volume-592d4fbf-a912-4798-8137-0d4b920d9e5a" in namespace "downward-api-5868" to be "Succeeded or Failed"
Aug 26 00:43:09.060: INFO: Pod "downwardapi-volume-592d4fbf-a912-4798-8137-0d4b920d9e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 159.559356ms
Aug 26 00:43:11.221: INFO: Pod "downwardapi-volume-592d4fbf-a912-4798-8137-0d4b920d9e5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.319696876s
STEP: Saw pod success
Aug 26 00:43:11.221: INFO: Pod "downwardapi-volume-592d4fbf-a912-4798-8137-0d4b920d9e5a" satisfied condition "Succeeded or Failed"
Aug 26 00:43:11.380: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod downwardapi-volume-592d4fbf-a912-4798-8137-0d4b920d9e5a container client-container: <nil>
STEP: delete the pod
Aug 26 00:43:11.705: INFO: Waiting for pod downwardapi-volume-592d4fbf-a912-4798-8137-0d4b920d9e5a to disappear
Aug 26 00:43:11.875: INFO: Pod downwardapi-volume-592d4fbf-a912-4798-8137-0d4b920d9e5a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 20 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Driver windows-gcepd doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:174
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":91,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:12.235: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 22 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-c737fbb6-48fd-4f1d-8af1-0f12922dfab9
STEP: Creating a pod to test consume configMaps
Aug 26 00:43:07.421: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6405536e-2a3f-493d-91a2-08d07982e9e1" in namespace "projected-5130" to be "Succeeded or Failed"
Aug 26 00:43:07.576: INFO: Pod "pod-projected-configmaps-6405536e-2a3f-493d-91a2-08d07982e9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 155.000795ms
Aug 26 00:43:09.732: INFO: Pod "pod-projected-configmaps-6405536e-2a3f-493d-91a2-08d07982e9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31059775s
Aug 26 00:43:11.906: INFO: Pod "pod-projected-configmaps-6405536e-2a3f-493d-91a2-08d07982e9e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.484444251s
STEP: Saw pod success
Aug 26 00:43:11.906: INFO: Pod "pod-projected-configmaps-6405536e-2a3f-493d-91a2-08d07982e9e1" satisfied condition "Succeeded or Failed"
Aug 26 00:43:12.061: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-projected-configmaps-6405536e-2a3f-493d-91a2-08d07982e9e1 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Aug 26 00:43:12.385: INFO: Waiting for pod pod-projected-configmaps-6405536e-2a3f-493d-91a2-08d07982e9e1 to disappear
Aug 26 00:43:12.540: INFO: Pod pod-projected-configmaps-6405536e-2a3f-493d-91a2-08d07982e9e1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.523 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:13.482: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:14.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6315" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":9,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:14.438: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 178 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:252
    should preserve attachment policy when no CSIDriver present
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:274
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":4,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:14.948: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 19 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":91,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:42:59.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:434
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":10,"skipped":91,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:16.486: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 137 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
Aug 26 00:43:13.043: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 26 00:43:13.204: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-skqm
STEP: Creating a pod to test subpath
Aug 26 00:43:13.367: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-skqm" in namespace "provisioning-7335" to be "Succeeded or Failed"
Aug 26 00:43:13.526: INFO: Pod "pod-subpath-test-inlinevolume-skqm": Phase="Pending", Reason="", readiness=false. Elapsed: 159.566778ms
Aug 26 00:43:15.686: INFO: Pod "pod-subpath-test-inlinevolume-skqm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319536282s
Aug 26 00:43:17.846: INFO: Pod "pod-subpath-test-inlinevolume-skqm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479671911s
Aug 26 00:43:20.014: INFO: Pod "pod-subpath-test-inlinevolume-skqm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.647651693s
STEP: Saw pod success
Aug 26 00:43:20.014: INFO: Pod "pod-subpath-test-inlinevolume-skqm" satisfied condition "Succeeded or Failed"
Aug 26 00:43:20.174: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-skqm container test-container-subpath-inlinevolume-skqm: <nil>
STEP: delete the pod
Aug 26 00:43:20.499: INFO: Waiting for pod pod-subpath-test-inlinevolume-skqm to disappear
Aug 26 00:43:20.658: INFO: Pod pod-subpath-test-inlinevolume-skqm no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-skqm
Aug 26 00:43:20.658: INFO: Deleting pod "pod-subpath-test-inlinevolume-skqm" in namespace "provisioning-7335"
... skipping 74 lines ...
• [SLOW TEST:25.224 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":6,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:21.731: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 97 lines ...
• [SLOW TEST:8.954 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:457
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":8,"skipped":63,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
Aug 26 00:43:17.435: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug 26 00:43:17.435: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zst7
STEP: Creating a pod to test subpath
Aug 26 00:43:17.606: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zst7" in namespace "provisioning-234" to be "Succeeded or Failed"
Aug 26 00:43:17.764: INFO: Pod "pod-subpath-test-inlinevolume-zst7": Phase="Pending", Reason="", readiness=false. Elapsed: 158.275714ms
Aug 26 00:43:19.923: INFO: Pod "pod-subpath-test-inlinevolume-zst7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317416339s
Aug 26 00:43:22.082: INFO: Pod "pod-subpath-test-inlinevolume-zst7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.475841361s
STEP: Saw pod success
Aug 26 00:43:22.082: INFO: Pod "pod-subpath-test-inlinevolume-zst7" satisfied condition "Succeeded or Failed"
Aug 26 00:43:22.240: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-zst7 container test-container-subpath-inlinevolume-zst7: <nil>
STEP: delete the pod
Aug 26 00:43:22.566: INFO: Waiting for pod pod-subpath-test-inlinevolume-zst7 to disappear
Aug 26 00:43:22.724: INFO: Pod pod-subpath-test-inlinevolume-zst7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zst7
Aug 26 00:43:22.724: INFO: Deleting pod "pod-subpath-test-inlinevolume-zst7" in namespace "provisioning-234"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":11,"skipped":118,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:23.380: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 179 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:23.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2216" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":7,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:23.639: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:24.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6020" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set","total":-1,"completed":9,"skipped":66,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 93 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:426
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:455
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":2,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:28.566: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 20 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:43:23.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:235
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:38.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4218" for this suite.


• [SLOW TEST:15.583 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:235
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":8,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:39.317: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":5,"skipped":28,"failed":0}
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:43:19.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
• [SLOW TEST:23.135 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":6,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Aug 26 00:43:32.387: INFO: PersistentVolumeClaim pvc-shtwt found but phase is Pending instead of Bound.
Aug 26 00:43:34.546: INFO: PersistentVolumeClaim pvc-shtwt found and phase=Bound (15.246719456s)
Aug 26 00:43:34.546: INFO: Waiting up to 3m0s for PersistentVolume local-rz248 to have phase Bound
Aug 26 00:43:34.701: INFO: PersistentVolume local-rz248 found and phase=Bound (155.069569ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dqvs
STEP: Creating a pod to test subpath
Aug 26 00:43:35.169: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dqvs" in namespace "provisioning-276" to be "Succeeded or Failed"
Aug 26 00:43:35.324: INFO: Pod "pod-subpath-test-preprovisionedpv-dqvs": Phase="Pending", Reason="", readiness=false. Elapsed: 155.353844ms
Aug 26 00:43:37.479: INFO: Pod "pod-subpath-test-preprovisionedpv-dqvs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310619338s
Aug 26 00:43:39.636: INFO: Pod "pod-subpath-test-preprovisionedpv-dqvs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.466926241s
STEP: Saw pod success
Aug 26 00:43:39.636: INFO: Pod "pod-subpath-test-preprovisionedpv-dqvs" satisfied condition "Succeeded or Failed"
Aug 26 00:43:39.793: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-dqvs container test-container-subpath-preprovisionedpv-dqvs: <nil>
STEP: delete the pod
Aug 26 00:43:40.111: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dqvs to disappear
Aug 26 00:43:40.267: INFO: Pod pod-subpath-test-preprovisionedpv-dqvs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dqvs
Aug 26 00:43:40.267: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dqvs" in namespace "provisioning-276"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":83,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:42.419: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 51 lines ...
Aug 26 00:43:40.721: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug 26 00:43:40.721: INFO: stdout: "scheduler controller-manager etcd-0 etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Aug 26 00:43:40.721: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9198 get componentstatuses scheduler'
Aug 26 00:43:41.285: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug 26 00:43:41.285: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of controller-manager
Aug 26 00:43:41.285: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9198 get componentstatuses controller-manager'
Aug 26 00:43:41.858: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug 26 00:43:41.858: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-0
Aug 26 00:43:41.858: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9198 get componentstatuses etcd-0'
Aug 26 00:43:42.428: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug 26 00:43:42.428: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of etcd-1
Aug 26 00:43:42.428: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9198 get componentstatuses etcd-1'
Aug 26 00:43:42.993: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Aug 26 00:43:42.993: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:42.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9198" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":9,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:43.332: INFO: Only supported for providers [vsphere] (not aws)
... skipping 114 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":12,"skipped":134,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:43:42.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 26 00:43:43.367: INFO: Waiting up to 5m0s for pod "pod-0b83ee17-d40b-4c90-9ea4-6a04188c4c98" in namespace "emptydir-3435" to be "Succeeded or Failed"
Aug 26 00:43:43.522: INFO: Pod "pod-0b83ee17-d40b-4c90-9ea4-6a04188c4c98": Phase="Pending", Reason="", readiness=false. Elapsed: 154.874771ms
Aug 26 00:43:45.677: INFO: Pod "pod-0b83ee17-d40b-4c90-9ea4-6a04188c4c98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.310314988s
STEP: Saw pod success
Aug 26 00:43:45.677: INFO: Pod "pod-0b83ee17-d40b-4c90-9ea4-6a04188c4c98" satisfied condition "Succeeded or Failed"
Aug 26 00:43:45.832: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-0b83ee17-d40b-4c90-9ea4-6a04188c4c98 container test-container: <nil>
STEP: delete the pod
Aug 26 00:43:46.148: INFO: Waiting for pod pod-0b83ee17-d40b-4c90-9ea4-6a04188c4c98 to disappear
Aug 26 00:43:46.303: INFO: Pod pod-0b83ee17-d40b-4c90-9ea4-6a04188c4c98 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:46.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3435" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":87,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 53 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 26 00:43:44.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96b611e2-c67e-4abb-be2f-f0e398f3b25d" in namespace "projected-3854" to be "Succeeded or Failed"
Aug 26 00:43:44.849: INFO: Pod "downwardapi-volume-96b611e2-c67e-4abb-be2f-f0e398f3b25d": Phase="Pending", Reason="", readiness=false. Elapsed: 158.046598ms
Aug 26 00:43:47.008: INFO: Pod "downwardapi-volume-96b611e2-c67e-4abb-be2f-f0e398f3b25d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316817652s
STEP: Saw pod success
Aug 26 00:43:47.008: INFO: Pod "downwardapi-volume-96b611e2-c67e-4abb-be2f-f0e398f3b25d" satisfied condition "Succeeded or Failed"
Aug 26 00:43:47.167: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod downwardapi-volume-96b611e2-c67e-4abb-be2f-f0e398f3b25d container client-container: <nil>
STEP: delete the pod
Aug 26 00:43:47.492: INFO: Waiting for pod downwardapi-volume-96b611e2-c67e-4abb-be2f-f0e398f3b25d to disappear
Aug 26 00:43:47.650: INFO: Pod downwardapi-volume-96b611e2-c67e-4abb-be2f-f0e398f3b25d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:47.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3854" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":135,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:47.985: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:43:48.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4843" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":12,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:48.681: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Aug 26 00:43:43.274: INFO: PersistentVolumeClaim pvc-7t9wl found and phase=Bound (158.007956ms)
Aug 26 00:43:43.274: INFO: Waiting up to 3m0s for PersistentVolume nfs-xbj6z to have phase Bound
Aug 26 00:43:43.432: INFO: PersistentVolume nfs-xbj6z found and phase=Bound (158.099625ms)
STEP: Checking pod has write access to PersistentVolume
Aug 26 00:43:43.748: INFO: Creating nfs test pod
Aug 26 00:43:43.909: INFO: Pod should terminate with exitcode 0 (success)
Aug 26 00:43:43.909: INFO: Waiting up to 5m0s for pod "pvc-tester-zxwzd" in namespace "pv-721" to be "Succeeded or Failed"
Aug 26 00:43:44.068: INFO: Pod "pvc-tester-zxwzd": Phase="Pending", Reason="", readiness=false. Elapsed: 158.542832ms
Aug 26 00:43:46.226: INFO: Pod "pvc-tester-zxwzd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.31695057s
STEP: Saw pod success
Aug 26 00:43:46.226: INFO: Pod "pvc-tester-zxwzd" satisfied condition "Succeeded or Failed"
Aug 26 00:43:46.226: INFO: Pod pvc-tester-zxwzd succeeded 
Aug 26 00:43:46.226: INFO: Deleting pod "pvc-tester-zxwzd" in namespace "pv-721"
Aug 26 00:43:46.390: INFO: Wait up to 5m0s for pod "pvc-tester-zxwzd" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Aug 26 00:43:46.548: INFO: Deleting PVC pvc-7t9wl to trigger reclamation of PV 
Aug 26 00:43:46.548: INFO: Deleting PersistentVolumeClaim "pvc-7t9wl"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":3,"skipped":33,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:43:54.184: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 131 lines ...
Aug 26 00:43:13.020: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4637-aws-scqhxsw
STEP: creating a claim
Aug 26 00:43:13.179: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-f7v8
STEP: Creating a pod to test subpath
Aug 26 00:43:13.659: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-f7v8" in namespace "provisioning-4637" to be "Succeeded or Failed"
Aug 26 00:43:13.818: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 158.753454ms
Aug 26 00:43:15.983: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323653473s
Aug 26 00:43:18.142: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483204328s
Aug 26 00:43:20.302: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.642701063s
Aug 26 00:43:22.461: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.80213598s
Aug 26 00:43:24.621: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.961590578s
... skipping 3 lines ...
Aug 26 00:43:33.259: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.599973242s
Aug 26 00:43:35.422: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.76336964s
Aug 26 00:43:37.583: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.92351872s
Aug 26 00:43:39.742: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.082828116s
Aug 26 00:43:41.901: INFO: Pod "pod-subpath-test-dynamicpv-f7v8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.242083762s
STEP: Saw pod success
Aug 26 00:43:41.901: INFO: Pod "pod-subpath-test-dynamicpv-f7v8" satisfied condition "Succeeded or Failed"
Aug 26 00:43:42.060: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-f7v8 container test-container-volume-dynamicpv-f7v8: <nil>
STEP: delete the pod
Aug 26 00:43:42.393: INFO: Waiting for pod pod-subpath-test-dynamicpv-f7v8 to disappear
Aug 26 00:43:42.552: INFO: Pod pod-subpath-test-dynamicpv-f7v8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-f7v8
Aug 26 00:43:42.552: INFO: Deleting pod "pod-subpath-test-dynamicpv-f7v8" in namespace "provisioning-4637"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:04.645: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":14,"skipped":142,"failed":0}
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:44:04.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:44:06.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1691" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":15,"skipped":142,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:06.593: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 85 lines ...
• [SLOW TEST:13.227 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:466
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]","total":-1,"completed":4,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:07.459: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:92
STEP: Creating a pod to test downward API volume plugin
Aug 26 00:44:05.622: INFO: Waiting up to 5m0s for pod "metadata-volume-58636e62-766f-4412-a5c1-10b9cb99c37e" in namespace "projected-1526" to be "Succeeded or Failed"
Aug 26 00:44:05.781: INFO: Pod "metadata-volume-58636e62-766f-4412-a5c1-10b9cb99c37e": Phase="Pending", Reason="", readiness=false. Elapsed: 158.801616ms
Aug 26 00:44:07.941: INFO: Pod "metadata-volume-58636e62-766f-4412-a5c1-10b9cb99c37e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.31825026s
STEP: Saw pod success
Aug 26 00:44:07.941: INFO: Pod "metadata-volume-58636e62-766f-4412-a5c1-10b9cb99c37e" satisfied condition "Succeeded or Failed"
Aug 26 00:44:08.105: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod metadata-volume-58636e62-766f-4412-a5c1-10b9cb99c37e container client-container: <nil>
STEP: delete the pod
Aug 26 00:44:08.434: INFO: Waiting for pod metadata-volume-58636e62-766f-4412-a5c1-10b9cb99c37e to disappear
Aug 26 00:44:08.594: INFO: Pod metadata-volume-58636e62-766f-4412-a5c1-10b9cb99c37e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:44:08.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1526" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:08.926: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 543 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:44:10.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4906" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":16,"skipped":154,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:7.774 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":10,"skipped":128,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:17.157: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376

      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":96,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:43:21.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 97 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:426
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:455
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":11,"skipped":96,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:18.627: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 202 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should support inline execution and attach
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:551
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":13,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:20.613: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 40 lines ...
STEP: Creating a kubernetes client
Aug 26 00:42:07.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:58
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:247
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-7049" for this suite.


• [SLOW TEST:134.399 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:247
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":6,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:21.560: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Aug 26 00:44:18.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's args
Aug 26 00:44:19.741: INFO: Waiting up to 5m0s for pod "var-expansion-4dd5fe62-b33a-4fe3-8e89-97631f50a5a7" in namespace "var-expansion-4151" to be "Succeeded or Failed"
Aug 26 00:44:19.900: INFO: Pod "var-expansion-4dd5fe62-b33a-4fe3-8e89-97631f50a5a7": Phase="Pending", Reason="", readiness=false. Elapsed: 159.527633ms
Aug 26 00:44:22.060: INFO: Pod "var-expansion-4dd5fe62-b33a-4fe3-8e89-97631f50a5a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.31958976s
STEP: Saw pod success
Aug 26 00:44:22.060: INFO: Pod "var-expansion-4dd5fe62-b33a-4fe3-8e89-97631f50a5a7" satisfied condition "Succeeded or Failed"
Aug 26 00:44:22.220: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod var-expansion-4dd5fe62-b33a-4fe3-8e89-97631f50a5a7 container dapi-container: <nil>
STEP: delete the pod
Aug 26 00:44:22.546: INFO: Waiting for pod var-expansion-4dd5fe62-b33a-4fe3-8e89-97631f50a5a7 to disappear
Aug 26 00:44:22.705: INFO: Pod var-expansion-4dd5fe62-b33a-4fe3-8e89-97631f50a5a7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 112 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Aug 26 00:44:21.918: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8866 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Aug 26 00:44:23.720: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Aug 26 00:44:23.720: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8866 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Aug 26 00:44:25.485: INFO: rc: 255
Aug 26 00:44:25.485: INFO: got err error running /tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8866 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0826 00:44:25.298373     198 merged_client_builder.go:163] Using in-cluster namespace
I0826 00:44:25.298781     198 merged_client_builder.go:121] Using in-cluster configuration
I0826 00:44:25.301332     198 merged_client_builder.go:121] Using in-cluster configuration
I0826 00:44:25.305359     198 merged_client_builder.go:121] Using in-cluster configuration
I0826 00:44:25.305866     198 round_trippers.go:421] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-8866/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0826 00:44:25.311702     198 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc0005ce000, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d04b80, 0xc000000003, 0x0, 0x0, 0xc000286fc0, 0x2ae3c39, 0xa, 0x73, 0x40b300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d04b80, 0x3, 0x0, 0x0, 0x2, 0xc000973ac8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00058d440, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e5e380, 0xc00010c960, 0x1d06eb8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8b5
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0003d98c0, 0xc000473da0, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Aug 26 00:44:25.486: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8866 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Aug 26 00:44:27.179: INFO: rc: 255
Aug 26 00:44:27.180: INFO: got err error running /tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8866 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0826 00:44:27.054010     210 merged_client_builder.go:163] Using in-cluster namespace
I0826 00:44:27.069140     210 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 14 milliseconds
I0826 00:44:27.069262     210 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0826 00:44:27.080302     210 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 10 milliseconds
I0826 00:44:27.080378     210 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0826 00:44:27.080415     210 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0826 00:44:27.082946     210 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0826 00:44:27.083008     210 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0826 00:44:27.085839     210 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0826 00:44:27.085902     210 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0826 00:44:27.088094     210 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0826 00:44:27.088163     210 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0826 00:44:27.088384     210 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
F0826 00:44:27.088419     210 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc0008e21c0, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d04b80, 0xc000000003, 0x0, 0x0, 0xc0003b23f0, 0x2ae3c39, 0xa, 0x73, 0x40b300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d04b80, 0x3, 0x0, 0x0, 0x2, 0xc000731ac8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0006a0120, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e5d720, 0xc0003a1ec0, 0x1d06eb8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x945
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004b1b80, 0xc000901cb0, 0x1, 0x3)
... skipping 24 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Aug 26 00:44:27.180: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8866 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Aug 26 00:44:28.852: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Aug 26 00:44:28.852: INFO: stdout: "I0826 00:44:28.733529     222 merged_client_builder.go:121] Using in-cluster configuration\nI0826 00:44:28.740966     222 merged_client_builder.go:121] Using in-cluster configuration\nI0826 00:44:28.753586     222 merged_client_builder.go:121] Using in-cluster configuration\nI0826 00:44:28.763527     222 round_trippers.go:444] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 9 milliseconds\nNo resources found in invalid namespace.\n"
Aug 26 00:44:28.852: INFO: stdout: I0826 00:44:28.733529     222 merged_client_builder.go:121] Using in-cluster configuration
... skipping 70 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should handle in-cluster config
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:635
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":10,"skipped":83,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:42.432: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 97 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
Aug 26 00:44:21.411: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Aug 26 00:44:21.411: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-cbqs
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 00:44:21.568: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-cbqs" in namespace "provisioning-7789" to be "Succeeded or Failed"
Aug 26 00:44:21.723: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Pending", Reason="", readiness=false. Elapsed: 154.94952ms
Aug 26 00:44:23.878: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310256633s
Aug 26 00:44:26.034: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 4.465863266s
Aug 26 00:44:28.192: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 6.623430865s
Aug 26 00:44:30.347: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 8.778910088s
Aug 26 00:44:32.503: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 10.934701077s
Aug 26 00:44:34.658: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 13.089986281s
Aug 26 00:44:36.814: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 15.245284788s
Aug 26 00:44:38.969: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 17.400783401s
Aug 26 00:44:41.125: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 19.556441782s
Aug 26 00:44:43.280: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Running", Reason="", readiness=true. Elapsed: 21.711885608s
Aug 26 00:44:45.436: INFO: Pod "pod-subpath-test-inlinevolume-cbqs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.867414728s
STEP: Saw pod success
Aug 26 00:44:45.436: INFO: Pod "pod-subpath-test-inlinevolume-cbqs" satisfied condition "Succeeded or Failed"
Aug 26 00:44:45.591: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-cbqs container test-container-subpath-inlinevolume-cbqs: <nil>
STEP: delete the pod
Aug 26 00:44:45.908: INFO: Waiting for pod pod-subpath-test-inlinevolume-cbqs to disappear
Aug 26 00:44:46.063: INFO: Pod pod-subpath-test-inlinevolume-cbqs no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-cbqs
Aug 26 00:44:46.063: INFO: Deleting pod "pod-subpath-test-inlinevolume-cbqs" in namespace "provisioning-7789"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":14,"skipped":97,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:46.718: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 114 lines ...
Aug 26 00:44:31.858: INFO: PersistentVolumeClaim pvc-6lkrz found but phase is Pending instead of Bound.
Aug 26 00:44:34.017: INFO: PersistentVolumeClaim pvc-6lkrz found and phase=Bound (10.957122764s)
Aug 26 00:44:34.018: INFO: Waiting up to 3m0s for PersistentVolume local-2fk6m to have phase Bound
Aug 26 00:44:34.176: INFO: PersistentVolume local-2fk6m found and phase=Bound (158.80379ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hms2
STEP: Creating a pod to test subpath
Aug 26 00:44:34.654: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hms2" in namespace "provisioning-8860" to be "Succeeded or Failed"
Aug 26 00:44:34.814: INFO: Pod "pod-subpath-test-preprovisionedpv-hms2": Phase="Pending", Reason="", readiness=false. Elapsed: 159.145853ms
Aug 26 00:44:36.973: INFO: Pod "pod-subpath-test-preprovisionedpv-hms2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318523855s
Aug 26 00:44:39.132: INFO: Pod "pod-subpath-test-preprovisionedpv-hms2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.477892502s
STEP: Saw pod success
Aug 26 00:44:39.132: INFO: Pod "pod-subpath-test-preprovisionedpv-hms2" satisfied condition "Succeeded or Failed"
Aug 26 00:44:39.292: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-hms2 container test-container-subpath-preprovisionedpv-hms2: <nil>
STEP: delete the pod
Aug 26 00:44:39.616: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hms2 to disappear
Aug 26 00:44:39.774: INFO: Pod pod-subpath-test-preprovisionedpv-hms2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hms2
Aug 26 00:44:39.775: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hms2" in namespace "provisioning-8860"
STEP: Creating pod pod-subpath-test-preprovisionedpv-hms2
STEP: Creating a pod to test subpath
Aug 26 00:44:40.093: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hms2" in namespace "provisioning-8860" to be "Succeeded or Failed"
Aug 26 00:44:40.252: INFO: Pod "pod-subpath-test-preprovisionedpv-hms2": Phase="Pending", Reason="", readiness=false. Elapsed: 158.971777ms
Aug 26 00:44:42.411: INFO: Pod "pod-subpath-test-preprovisionedpv-hms2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.318485832s
STEP: Saw pod success
Aug 26 00:44:42.412: INFO: Pod "pod-subpath-test-preprovisionedpv-hms2" satisfied condition "Succeeded or Failed"
Aug 26 00:44:42.571: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-hms2 container test-container-subpath-preprovisionedpv-hms2: <nil>
STEP: delete the pod
Aug 26 00:44:42.895: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hms2 to disappear
Aug 26 00:44:43.054: INFO: Pod pod-subpath-test-preprovisionedpv-hms2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hms2
Aug 26 00:44:43.054: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hms2" in namespace "provisioning-8860"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":11,"skipped":130,"failed":0}

S
------------------------------
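The "PersistentVolumeClaim ... found but phase is Pending instead of Bound." retries above follow the same poll-until-phase pattern, this time against the claim's status. A sketch under the same assumptions as the earlier snippet (client-go clientset, same package and imports; the helper name is illustrative):

package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForClaimBound polls a PVC until Status.Phase is Bound, mirroring the
// "found but phase is Pending instead of Bound" lines in the log.
func waitForClaimBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != v1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}

------------------------------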
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:47.395: INFO: Only supported for providers [vsphere] (not aws)
... skipping 72 lines ...
STEP: creating a claim
Aug 26 00:43:54.372: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 26 00:43:54.529: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpath8x8v6] to have phase Bound
Aug 26 00:43:54.686: INFO: PersistentVolumeClaim csi-hostpath8x8v6 found and phase=Bound (156.242518ms)
STEP: Creating pod pod-subpath-test-dynamicpv-swt8
STEP: Creating a pod to test subpath
Aug 26 00:43:55.158: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-swt8" in namespace "provisioning-2185" to be "Succeeded or Failed"
Aug 26 00:43:55.314: INFO: Pod "pod-subpath-test-dynamicpv-swt8": Phase="Pending", Reason="", readiness=false. Elapsed: 156.384552ms
Aug 26 00:43:57.471: INFO: Pod "pod-subpath-test-dynamicpv-swt8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313026623s
Aug 26 00:43:59.628: INFO: Pod "pod-subpath-test-dynamicpv-swt8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469922206s
Aug 26 00:44:01.785: INFO: Pod "pod-subpath-test-dynamicpv-swt8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626841079s
Aug 26 00:44:03.941: INFO: Pod "pod-subpath-test-dynamicpv-swt8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.783380308s
Aug 26 00:44:06.120: INFO: Pod "pod-subpath-test-dynamicpv-swt8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.96190045s
STEP: Saw pod success
Aug 26 00:44:06.120: INFO: Pod "pod-subpath-test-dynamicpv-swt8" satisfied condition "Succeeded or Failed"
Aug 26 00:44:06.287: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-subpath-test-dynamicpv-swt8 container test-container-subpath-dynamicpv-swt8: <nil>
STEP: delete the pod
Aug 26 00:44:06.627: INFO: Waiting for pod pod-subpath-test-dynamicpv-swt8 to disappear
Aug 26 00:44:06.784: INFO: Pod pod-subpath-test-dynamicpv-swt8 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-swt8
Aug 26 00:44:06.784: INFO: Deleting pod "pod-subpath-test-dynamicpv-swt8" in namespace "provisioning-2185"
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:47.875: INFO: Driver gcepd doesn't support ntfs -- skipping
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-lg9n
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 00:44:34.394: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lg9n" in namespace "subpath-7063" to be "Succeeded or Failed"
Aug 26 00:44:34.552: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Pending", Reason="", readiness=false. Elapsed: 157.717382ms
Aug 26 00:44:36.710: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 2.315966391s
Aug 26 00:44:38.868: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 4.474348334s
Aug 26 00:44:41.031: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 6.6370312s
Aug 26 00:44:43.189: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 8.795524656s
Aug 26 00:44:45.348: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 10.953949807s
Aug 26 00:44:47.507: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 13.112692992s
Aug 26 00:44:49.665: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 15.271654907s
Aug 26 00:44:51.824: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 17.429913274s
Aug 26 00:44:53.982: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Running", Reason="", readiness=true. Elapsed: 19.588121603s
Aug 26 00:44:56.140: INFO: Pod "pod-subpath-test-configmap-lg9n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.746056769s
STEP: Saw pod success
Aug 26 00:44:56.140: INFO: Pod "pod-subpath-test-configmap-lg9n" satisfied condition "Succeeded or Failed"
Aug 26 00:44:56.298: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-subpath-test-configmap-lg9n container test-container-subpath-configmap-lg9n: <nil>
STEP: delete the pod
Aug 26 00:44:56.621: INFO: Waiting for pod pod-subpath-test-configmap-lg9n to disappear
Aug 26 00:44:56.779: INFO: Pod pod-subpath-test-configmap-lg9n no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lg9n
Aug 26 00:44:56.779: INFO: Deleting pod "pod-subpath-test-configmap-lg9n" in namespace "subpath-7063"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":86,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:44:57.266: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 63 lines ...
• [SLOW TEST:13.206 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":133,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:44:57.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 26 00:44:58.243: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-6aa934ce-383b-48fd-bad4-a992a80cab88" in namespace "security-context-test-3894" to be "Succeeded or Failed"
Aug 26 00:44:58.400: INFO: Pod "busybox-readonly-false-6aa934ce-383b-48fd-bad4-a992a80cab88": Phase="Pending", Reason="", readiness=false. Elapsed: 157.869505ms
Aug 26 00:45:00.559: INFO: Pod "busybox-readonly-false-6aa934ce-383b-48fd-bad4-a992a80cab88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316113187s
Aug 26 00:45:00.559: INFO: Pod "busybox-readonly-false-6aa934ce-383b-48fd-bad4-a992a80cab88" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:45:00.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3894" for this suite.

•
... skipping 22 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":7,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:01.369: INFO: Only supported for providers [gce gke] (not aws)
... skipping 88 lines ...
Aug 26 00:45:00.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Aug 26 00:45:01.606: INFO: Waiting up to 5m0s for pod "client-containers-83be9e20-981d-40e6-a19c-b53f49d74f35" in namespace "containers-7615" to be "Succeeded or Failed"
Aug 26 00:45:01.765: INFO: Pod "client-containers-83be9e20-981d-40e6-a19c-b53f49d74f35": Phase="Pending", Reason="", readiness=false. Elapsed: 159.025343ms
Aug 26 00:45:03.924: INFO: Pod "client-containers-83be9e20-981d-40e6-a19c-b53f49d74f35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.318406083s
STEP: Saw pod success
Aug 26 00:45:03.924: INFO: Pod "client-containers-83be9e20-981d-40e6-a19c-b53f49d74f35" satisfied condition "Succeeded or Failed"
Aug 26 00:45:04.084: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod client-containers-83be9e20-981d-40e6-a19c-b53f49d74f35 container test-container: <nil>
STEP: delete the pod
Aug 26 00:45:04.409: INFO: Waiting for pod client-containers-83be9e20-981d-40e6-a19c-b53f49d74f35 to disappear
Aug 26 00:45:04.567: INFO: Pod client-containers-83be9e20-981d-40e6-a19c-b53f49d74f35 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:45:04.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7615" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":137,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:04.913: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":8,"skipped":52,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":90,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:45:00.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
• [SLOW TEST:7.852 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":13,"skipped":90,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:129.935 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently","total":-1,"completed":7,"skipped":48,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:09.222: INFO: Only supported for providers [gce gke] (not aws)
... skipping 119 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-3af8e248-7513-42ae-9041-275dcacefc77
STEP: Creating a pod to test consume configMaps
Aug 26 00:45:10.386: INFO: Waiting up to 5m0s for pod "pod-configmaps-3716af80-558c-4d2e-9451-3a47889b1996" in namespace "configmap-6087" to be "Succeeded or Failed"
Aug 26 00:45:10.548: INFO: Pod "pod-configmaps-3716af80-558c-4d2e-9451-3a47889b1996": Phase="Pending", Reason="", readiness=false. Elapsed: 161.802905ms
Aug 26 00:45:12.709: INFO: Pod "pod-configmaps-3716af80-558c-4d2e-9451-3a47889b1996": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.323048906s
STEP: Saw pod success
Aug 26 00:45:12.709: INFO: Pod "pod-configmaps-3716af80-558c-4d2e-9451-3a47889b1996" satisfied condition "Succeeded or Failed"
Aug 26 00:45:12.870: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-configmaps-3716af80-558c-4d2e-9451-3a47889b1996 container configmap-volume-test: <nil>
STEP: delete the pod
Aug 26 00:45:13.198: INFO: Waiting for pod pod-configmaps-3716af80-558c-4d2e-9451-3a47889b1996 to disappear
Aug 26 00:45:13.358: INFO: Pod pod-configmaps-3716af80-558c-4d2e-9451-3a47889b1996 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:45:13.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6087" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:13.692: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 32 lines ...
• [SLOW TEST:8.569 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":14,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:17.340: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 335 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":14,"skipped":140,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:18.986: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 123 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:21.600: INFO: Only supported for providers [openstack] (not aws)
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":6,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:26.196: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 64 lines ...
Aug 26 00:45:02.189: INFO: PersistentVolumeClaim pvc-sjnqz found but phase is Pending instead of Bound.
Aug 26 00:45:04.344: INFO: PersistentVolumeClaim pvc-sjnqz found and phase=Bound (15.242843111s)
Aug 26 00:45:04.344: INFO: Waiting up to 3m0s for PersistentVolume aws-lwttv to have phase Bound
Aug 26 00:45:04.499: INFO: PersistentVolume aws-lwttv found and phase=Bound (154.975659ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-29cf
STEP: Creating a pod to test exec-volume-test
Aug 26 00:45:04.965: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-29cf" in namespace "volume-708" to be "Succeeded or Failed"
Aug 26 00:45:05.123: INFO: Pod "exec-volume-test-preprovisionedpv-29cf": Phase="Pending", Reason="", readiness=false. Elapsed: 158.493555ms
Aug 26 00:45:07.279: INFO: Pod "exec-volume-test-preprovisionedpv-29cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313700479s
Aug 26 00:45:09.434: INFO: Pod "exec-volume-test-preprovisionedpv-29cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469233681s
Aug 26 00:45:11.590: INFO: Pod "exec-volume-test-preprovisionedpv-29cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.624918429s
Aug 26 00:45:13.745: INFO: Pod "exec-volume-test-preprovisionedpv-29cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.780277309s
Aug 26 00:45:15.901: INFO: Pod "exec-volume-test-preprovisionedpv-29cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.935654255s
STEP: Saw pod success
Aug 26 00:45:15.901: INFO: Pod "exec-volume-test-preprovisionedpv-29cf" satisfied condition "Succeeded or Failed"
Aug 26 00:45:16.056: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-29cf container exec-container-preprovisionedpv-29cf: <nil>
STEP: delete the pod
Aug 26 00:45:16.373: INFO: Waiting for pod exec-volume-test-preprovisionedpv-29cf to disappear
Aug 26 00:45:16.528: INFO: Pod exec-volume-test-preprovisionedpv-29cf no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-29cf
Aug 26 00:45:16.528: INFO: Deleting pod "exec-volume-test-preprovisionedpv-29cf" in namespace "volume-708"
STEP: Deleting pv and pvc
Aug 26 00:45:16.683: INFO: Deleting PersistentVolumeClaim "pvc-sjnqz"
Aug 26 00:45:16.839: INFO: Deleting PersistentVolume "aws-lwttv"
Aug 26 00:45:17.270: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0b92edc33ed309665", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0b92edc33ed309665 is currently attached to i-0bd3dc087ffb17180
	status code: 400, request id: a354ecb5-7c1c-4c06-ba4e-67d907aaff6d
Aug 26 00:45:23.090: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0b92edc33ed309665", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0b92edc33ed309665 is currently attached to i-0bd3dc087ffb17180
	status code: 400, request id: a79784fa-e844-41c4-93df-43d8bc6249be
Aug 26 00:45:28.865: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-0b92edc33ed309665".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:45:28.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-708" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":15,"skipped":114,"failed":0}

S
------------------------------
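The two "Couldn't delete PD ... VolumeInUse ... sleeping 5s" lines followed by "Successfully deleted PD" in the block above show the delete-and-retry behaviour while the EBS volume is still detaching from the instance. A generic sketch of that retry loop; deleteFn stands in for the cloud provider's delete call and the names are assumptions, not the test's actual code:

package e2esketch

import (
	"strings"
	"time"
)

// deleteVolumeWithRetry retries deleteFn while the error still reports
// VolumeInUse (volume not yet detached), sleeping 5s between attempts as
// the log does, and gives up after maxAttempts.
func deleteVolumeWithRetry(deleteFn func() error, maxAttempts int) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		if err = deleteFn(); err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "VolumeInUse") {
			return err // some other failure: do not retry
		}
		time.Sleep(5 * time.Second)
	}
	return err
}

------------------------------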
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-838
STEP: Creating statefulset with conflicting port in namespace statefulset-838
STEP: Waiting until pod test-pod will start running in namespace statefulset-838
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-838
Aug 26 00:45:17.640: INFO: Observed stateful pod in namespace: statefulset-838, name: ss-0, uid: 84ad6cbb-d988-4a1d-bc7d-adee77dc8309, status phase: Pending. Waiting for statefulset controller to delete.
Aug 26 00:45:17.801: INFO: Observed stateful pod in namespace: statefulset-838, name: ss-0, uid: 84ad6cbb-d988-4a1d-bc7d-adee77dc8309, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 00:45:17.801: INFO: Observed stateful pod in namespace: statefulset-838, name: ss-0, uid: 84ad6cbb-d988-4a1d-bc7d-adee77dc8309, status phase: Failed. Waiting for statefulset controller to delete.
Aug 26 00:45:17.804: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-838
STEP: Removing pod with conflicting port in namespace statefulset-838
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-838 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Aug 26 00:45:22.454: INFO: Deleting all statefulset in ns statefulset-838
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:34.246: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 37 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":7,"skipped":25,"failed":0}
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:44:40.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
• [SLOW TEST:61.437 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:42.158: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 68 lines ...
• [SLOW TEST:16.171 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":7,"skipped":67,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":16,"skipped":115,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:42.558: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 68 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:48
STEP: Creating a pod to test hostPath mode
Aug 26 00:45:43.429: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2849" to be "Succeeded or Failed"
Aug 26 00:45:43.590: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 160.234926ms
Aug 26 00:45:45.750: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.320939341s
STEP: Saw pod success
Aug 26 00:45:45.750: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 26 00:45:45.911: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Aug 26 00:45:46.237: INFO: Waiting for pod pod-host-path-test to disappear
Aug 26 00:45:46.398: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:45:46.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2849" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:46.732: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 11 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":9,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:45:13.461: INFO: >>> kubeConfig: /root/.kube/config
... skipping 77 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:902
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":9,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:37.089 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":15,"skipped":124,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:54.647: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 41 lines ...
Aug 26 00:45:46.843: INFO: PersistentVolumeClaim pvc-sxk9j found but phase is Pending instead of Bound.
Aug 26 00:45:49.004: INFO: PersistentVolumeClaim pvc-sxk9j found and phase=Bound (8.805392324s)
Aug 26 00:45:49.004: INFO: Waiting up to 3m0s for PersistentVolume local-mz9md to have phase Bound
Aug 26 00:45:49.165: INFO: PersistentVolume local-mz9md found and phase=Bound (160.773457ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-bczz
STEP: Creating a pod to test exec-volume-test
Aug 26 00:45:49.648: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-bczz" in namespace "volume-1771" to be "Succeeded or Failed"
Aug 26 00:45:49.809: INFO: Pod "exec-volume-test-preprovisionedpv-bczz": Phase="Pending", Reason="", readiness=false. Elapsed: 160.935301ms
Aug 26 00:45:51.970: INFO: Pod "exec-volume-test-preprovisionedpv-bczz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.322067567s
STEP: Saw pod success
Aug 26 00:45:51.970: INFO: Pod "exec-volume-test-preprovisionedpv-bczz" satisfied condition "Succeeded or Failed"
Aug 26 00:45:52.131: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-bczz container exec-container-preprovisionedpv-bczz: <nil>
STEP: delete the pod
Aug 26 00:45:52.457: INFO: Waiting for pod exec-volume-test-preprovisionedpv-bczz to disappear
Aug 26 00:45:52.618: INFO: Pod exec-volume-test-preprovisionedpv-bczz no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-bczz
Aug 26 00:45:52.618: INFO: Deleting pod "exec-volume-test-preprovisionedpv-bczz" in namespace "volume-1771"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":65,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:45:56.879: INFO: Only supported for providers [openstack] (not aws)
... skipping 93 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-367bc9fe-d836-4825-968c-ea1a4ceed312
STEP: Creating a pod to test consume configMaps
Aug 26 00:45:58.137: INFO: Waiting up to 5m0s for pod "pod-configmaps-107f1818-a50b-4096-8804-e2b54fb7153c" in namespace "configmap-8430" to be "Succeeded or Failed"
Aug 26 00:45:58.298: INFO: Pod "pod-configmaps-107f1818-a50b-4096-8804-e2b54fb7153c": Phase="Pending", Reason="", readiness=false. Elapsed: 160.730388ms
Aug 26 00:46:00.462: INFO: Pod "pod-configmaps-107f1818-a50b-4096-8804-e2b54fb7153c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.324206224s
STEP: Saw pod success
Aug 26 00:46:00.462: INFO: Pod "pod-configmaps-107f1818-a50b-4096-8804-e2b54fb7153c" satisfied condition "Succeeded or Failed"
Aug 26 00:46:00.622: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-configmaps-107f1818-a50b-4096-8804-e2b54fb7153c container configmap-volume-test: <nil>
STEP: delete the pod
Aug 26 00:46:00.950: INFO: Waiting for pod pod-configmaps-107f1818-a50b-4096-8804-e2b54fb7153c to disappear
Aug 26 00:46:01.111: INFO: Pod pod-configmaps-107f1818-a50b-4096-8804-e2b54fb7153c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:01.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8430" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":11,"skipped":96,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:46:01.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 26 00:46:02.425: INFO: Waiting up to 5m0s for pod "pod-c6a2a56f-5d69-4242-882b-e97adca67546" in namespace "emptydir-7786" to be "Succeeded or Failed"
Aug 26 00:46:02.586: INFO: Pod "pod-c6a2a56f-5d69-4242-882b-e97adca67546": Phase="Pending", Reason="", readiness=false. Elapsed: 160.56197ms
Aug 26 00:46:04.747: INFO: Pod "pod-c6a2a56f-5d69-4242-882b-e97adca67546": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321603339s
STEP: Saw pod success
Aug 26 00:46:04.747: INFO: Pod "pod-c6a2a56f-5d69-4242-882b-e97adca67546" satisfied condition "Succeeded or Failed"
Aug 26 00:46:04.908: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-c6a2a56f-5d69-4242-882b-e97adca67546 container test-container: <nil>
STEP: delete the pod
Aug 26 00:46:05.236: INFO: Waiting for pod pod-c6a2a56f-5d69-4242-882b-e97adca67546 to disappear
Aug 26 00:46:05.396: INFO: Pod pod-c6a2a56f-5d69-4242-882b-e97adca67546 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:05.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7786" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Aug 26 00:46:02.558: INFO: PersistentVolumeClaim pvc-xjx6l found but phase is Pending instead of Bound.
Aug 26 00:46:04.718: INFO: PersistentVolumeClaim pvc-xjx6l found and phase=Bound (13.124868847s)
Aug 26 00:46:04.718: INFO: Waiting up to 3m0s for PersistentVolume local-q6gwf to have phase Bound
Aug 26 00:46:04.879: INFO: PersistentVolume local-q6gwf found and phase=Bound (160.335283ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wh5w
STEP: Creating a pod to test subpath
Aug 26 00:46:05.361: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wh5w" in namespace "provisioning-631" to be "Succeeded or Failed"
Aug 26 00:46:05.529: INFO: Pod "pod-subpath-test-preprovisionedpv-wh5w": Phase="Pending", Reason="", readiness=false. Elapsed: 168.336843ms
Aug 26 00:46:07.690: INFO: Pod "pod-subpath-test-preprovisionedpv-wh5w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.329312266s
Aug 26 00:46:09.852: INFO: Pod "pod-subpath-test-preprovisionedpv-wh5w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.491041647s
STEP: Saw pod success
Aug 26 00:46:09.852: INFO: Pod "pod-subpath-test-preprovisionedpv-wh5w" satisfied condition "Succeeded or Failed"
Aug 26 00:46:10.012: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-wh5w container test-container-subpath-preprovisionedpv-wh5w: <nil>
STEP: delete the pod
Aug 26 00:46:10.423: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wh5w to disappear
Aug 26 00:46:10.583: INFO: Pod pod-subpath-test-preprovisionedpv-wh5w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wh5w
Aug 26 00:46:10.583: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wh5w" in namespace "provisioning-631"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":75,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":10,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:45:48.054: INFO: >>> kubeConfig: /root/.kube/config
... skipping 20 lines ...
Aug 26 00:46:01.868: INFO: PersistentVolumeClaim pvc-tctzp found but phase is Pending instead of Bound.
Aug 26 00:46:04.028: INFO: PersistentVolumeClaim pvc-tctzp found and phase=Bound (8.802269477s)
Aug 26 00:46:04.028: INFO: Waiting up to 3m0s for PersistentVolume local-v8vpv to have phase Bound
Aug 26 00:46:04.188: INFO: PersistentVolume local-v8vpv found and phase=Bound (160.134046ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lhhp
STEP: Creating a pod to test subpath
Aug 26 00:46:04.670: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lhhp" in namespace "provisioning-4298" to be "Succeeded or Failed"
Aug 26 00:46:04.830: INFO: Pod "pod-subpath-test-preprovisionedpv-lhhp": Phase="Pending", Reason="", readiness=false. Elapsed: 160.209885ms
Aug 26 00:46:06.991: INFO: Pod "pod-subpath-test-preprovisionedpv-lhhp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320834922s
Aug 26 00:46:09.151: INFO: Pod "pod-subpath-test-preprovisionedpv-lhhp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.481270873s
STEP: Saw pod success
Aug 26 00:46:09.151: INFO: Pod "pod-subpath-test-preprovisionedpv-lhhp" satisfied condition "Succeeded or Failed"
Aug 26 00:46:09.311: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-lhhp container test-container-subpath-preprovisionedpv-lhhp: <nil>
STEP: delete the pod
Aug 26 00:46:09.684: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lhhp to disappear
Aug 26 00:46:09.844: INFO: Pod pod-subpath-test-preprovisionedpv-lhhp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lhhp
Aug 26 00:46:09.844: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lhhp" in namespace "provisioning-4298"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":11,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:15.175: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:14.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4890" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server","total":-1,"completed":10,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:19.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1657" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":11,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:19.516: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should adopt matching orphans and release non-matching pods
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:163
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":5,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:20.673: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:235
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":17,"skipped":129,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:23.320: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 226 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:24.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-784" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":6,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 10 lines ...
Aug 26 00:40:59.339: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:ap-northeast-2a]
Aug 26 00:40:59.339: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Aug 26 00:40:59.339: INFO: Creating storage class object and pvc object for driver - sc: &StorageClass{ObjectMeta:{topology-2299-aws-sc8l78z      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{{[{failure-domain.beta.kubernetes.io/zone [ap-northeast-2a]}]},},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- topology-2299    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{5368709120 0} {<nil>} 5Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*topology-2299-aws-sc8l78z,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating sc
STEP: Creating pvc
STEP: Creating pod
Aug 26 00:46:00.296: FAIL: Unexpected error:
    <*errors.errorString | 0xc0001f4200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 25 lines ...
STEP: Found 9 events.
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:40:59 +0000 UTC - event for pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36: {default-scheduler } FailedScheduling: 0/5 nodes are available: 5 pod has unbound immediate PersistentVolumeClaims.
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:41:05 +0000 UTC - event for pvc-t29x7: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-ac11b8d3-c0e8-48e5-9675-9ae1d7a401d8 using kubernetes.io/aws-ebs
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:41:07 +0000 UTC - event for pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36: {default-scheduler } Scheduled: Successfully assigned topology-2299/pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36 to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:41:10 +0000 UTC - event for pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-ac11b8d3-c0e8-48e5-9675-9ae1d7a401d8" 
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:41:12 +0000 UTC - event for pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:41:29 +0000 UTC - event for pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:41:29 +0000 UTC - event for pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:41:29 +0000 UTC - event for pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:19.218: INFO: At 2021-08-26 00:41:29 +0000 UTC - event for pod-3fd3b8c8-ee51-43be-928e-659b83ef0a36: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 00:46:19.381: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 00:46:19.381: INFO: 
Aug 26 00:46:19.541: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 00:46:19.700: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 9870 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 26 00:46:19.701: INFO: 
... skipping 222 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should provision a volume and schedule a pod with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164

      Aug 26 00:46:00.296: Unexpected error:
          <*errors.errorString | 0xc0001f4200>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:180
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":0,"skipped":4,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:25.727: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:26.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6280" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":155,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:27.211: INFO: Driver gcepd doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 46 lines ...
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0826 00:41:26.660318    4910 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Aug 26 00:46:26.977: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:26.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5220" for this suite.


• [SLOW TEST:303.136 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:27.319: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 115 lines ...
Aug 26 00:46:27.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 26 00:46:30.800: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:31.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6672" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":158,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:31.478: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Aug 26 00:41:18.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Aug 26 00:41:19.466: INFO: Waiting up to 5m0s for pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8" in namespace "downward-api-276" to be "Succeeded or Failed"
Aug 26 00:41:19.621: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 154.79129ms
Aug 26 00:41:21.776: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310128076s
Aug 26 00:41:23.931: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.465387439s
Aug 26 00:41:26.086: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.620251418s
Aug 26 00:41:28.241: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.775187927s
Aug 26 00:41:30.396: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.930228159s
... skipping 128 lines ...
Aug 26 00:46:08.473: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.007415661s
Aug 26 00:46:10.628: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.162411977s
Aug 26 00:46:12.783: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.317600616s
Aug 26 00:46:14.939: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.472777794s
Aug 26 00:46:17.094: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.627999205s
Aug 26 00:46:19.249: INFO: Pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.783134864s
Aug 26 00:46:21.571: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8" container "dapi-container": the server rejected our request for an unknown reason (get pods downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8)
STEP: delete the pod
Aug 26 00:46:21.727: INFO: Waiting for pod downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8 to disappear
Aug 26 00:46:21.881: INFO: Pod downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8 still exists
Aug 26 00:46:23.882: INFO: Waiting for pod downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8 to disappear
Aug 26 00:46:24.037: INFO: Pod downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8 still exists
Aug 26 00:46:25.882: INFO: Waiting for pod downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8 to disappear
Aug 26 00:46:26.037: INFO: Pod downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8 no longer exists
Aug 26 00:46:26.037: FAIL: Unexpected error:
    <*errors.errorString | 0xc002562fd0>: {
        s: "expected pod \"downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8\" success: Gave up after waiting 5m0s for pod \"downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8\" to be \"Succeeded or Failed\"",
    }
    expected pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8" success: Gave up after waiting 5m0s for pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000124c60, 0x4c32768, 0x15, 0xc003949400, 0x0, 0xc001251188, 0x2, 0x2, 0x4df0118)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...)
... skipping 13 lines ...
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "downward-api-276".
STEP: Found 6 events.
Aug 26 00:46:26.195: INFO: At 2021-08-26 00:41:19 +0000 UTC - event for downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8: {default-scheduler } Scheduled: Successfully assigned downward-api-276/downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8 to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:46:26.195: INFO: At 2021-08-26 00:41:19 +0000 UTC - event for downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:26.195: INFO: At 2021-08-26 00:42:37 +0000 UTC - event for downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:26.195: INFO: At 2021-08-26 00:42:37 +0000 UTC - event for downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:26.195: INFO: At 2021-08-26 00:42:38 +0000 UTC - event for downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:26.195: INFO: At 2021-08-26 00:42:38 +0000 UTC - event for downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 00:46:26.350: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 00:46:26.350: INFO: 
Aug 26 00:46:26.512: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 00:46:26.667: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 9870 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 26 00:46:26.667: INFO: 
... skipping 254 lines ...
• Failure [313.681 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597

  Aug 26 00:46:26.037: Unexpected error:
      <*errors.errorString | 0xc002562fd0>: {
          s: "expected pod \"downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8\" success: Gave up after waiting 5m0s for pod \"downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8\" to be \"Succeeded or Failed\"",
      }
      expected pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8" success: Gave up after waiting 5m0s for pod "downward-api-8e3dac50-6cb1-4689-bc62-625f0cd14ab8" to be "Succeeded or Failed"
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":1,"failures":["[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:32.241: INFO: Only supported for providers [vsphere] (not aws)
... skipping 271 lines ...
Aug 26 00:46:21.524: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Aug 26 00:46:21.524: INFO: Waiting for all frontend pods to be Running.
Aug 26 00:46:21.725: INFO: Waiting for frontend to serve content.
Aug 26 00:46:23.023: INFO: Trying to add a new entry to the guestbook.
Aug 26 00:46:23.187: INFO: Verifying that added entry can be retrieved.
Aug 26 00:46:23.354: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Aug 26 00:46:28.517: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3246 delete --grace-period=0 --force -f -'
Aug 26 00:46:29.276: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 26 00:46:29.276: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Aug 26 00:46:29.276: INFO: Running '/tmp/kubectl736812468/kubectl --server=https://api.e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-3246 delete --grace-period=0 --force -f -'
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":12,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:33.385: INFO: Driver local doesn't support ext3 -- skipping
... skipping 49 lines ...
• [SLOW TEST:9.177 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":7,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:33.655: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 108 lines ...
Aug 26 00:41:15.984: INFO: PersistentVolumeClaim pvc-zbnz4 found but phase is Pending instead of Bound.
Aug 26 00:41:18.146: INFO: PersistentVolumeClaim pvc-zbnz4 found and phase=Bound (8.812616428s)
Aug 26 00:41:18.146: INFO: Waiting up to 3m0s for PersistentVolume local-xjsjc to have phase Bound
Aug 26 00:41:18.308: INFO: PersistentVolume local-xjsjc found and phase=Bound (161.440691ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-w8wz
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 00:41:18.795: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w8wz" in namespace "provisioning-4083" to be "Succeeded or Failed"
Aug 26 00:41:18.956: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 161.522743ms
Aug 26 00:41:21.118: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323431809s
Aug 26 00:41:23.281: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485618486s
Aug 26 00:41:25.443: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.647716163s
Aug 26 00:41:27.604: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.8095235s
Aug 26 00:41:29.766: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.971409193s
... skipping 127 lines ...
Aug 26 00:46:06.625: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.829927047s
Aug 26 00:46:08.787: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.991857779s
Aug 26 00:46:10.949: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.153721194s
Aug 26 00:46:13.111: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.315631492s
Aug 26 00:46:15.273: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.477606607s
Aug 26 00:46:17.435: INFO: Pod "pod-subpath-test-preprovisionedpv-w8wz": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.639833713s
Aug 26 00:46:19.766: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-w8wz" container "init-volume-preprovisionedpv-w8wz": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-w8wz)
Aug 26 00:46:19.929: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-w8wz" container "test-container-subpath-preprovisionedpv-w8wz": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-w8wz)
STEP: delete the pod
Aug 26 00:46:20.091: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w8wz to disappear
Aug 26 00:46:20.253: INFO: Pod pod-subpath-test-preprovisionedpv-w8wz still exists
Aug 26 00:46:22.253: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w8wz to disappear
Aug 26 00:46:22.416: INFO: Pod pod-subpath-test-preprovisionedpv-w8wz still exists
Aug 26 00:46:24.253: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w8wz to disappear
Aug 26 00:46:24.415: INFO: Pod pod-subpath-test-preprovisionedpv-w8wz no longer exists
Aug 26 00:46:24.416: FAIL: Unexpected error:
    <*errors.errorString | 0xc00212fc60>: {
        s: "expected pod \"pod-subpath-test-preprovisionedpv-w8wz\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-preprovisionedpv-w8wz\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-preprovisionedpv-w8wz" success: Gave up after waiting 5m0s for pod "pod-subpath-test-preprovisionedpv-w8wz" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc002451600, 0x4c3205a, 0x15, 0xc0031de000, 0x0, 0xc00333f120, 0x1, 0x1, 0x4df0110)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 33 lines ...
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:41:05 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-6f2f5: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.20" in 4.190862119s
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:41:06 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-6f2f5: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Created: Created container agnhost-container
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:41:06 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-6f2f5: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Started: Started container agnhost-container
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:41:09 +0000 UTC - event for pvc-zbnz4: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "provisioning-4083" not found
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:41:18 +0000 UTC - event for pod-subpath-test-preprovisionedpv-w8wz: {default-scheduler } Scheduled: Successfully assigned provisioning-4083/pod-subpath-test-preprovisionedpv-w8wz to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:41:19 +0000 UTC - event for pod-subpath-test-preprovisionedpv-w8wz: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:42:26 +0000 UTC - event for pod-subpath-test-preprovisionedpv-w8wz: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:42:26 +0000 UTC - event for pod-subpath-test-preprovisionedpv-w8wz: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:42:27 +0000 UTC - event for pod-subpath-test-preprovisionedpv-w8wz: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:42:27 +0000 UTC - event for pod-subpath-test-preprovisionedpv-w8wz: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 00:46:27.383: INFO: At 2021-08-26 00:46:27 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-6f2f5: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Killing: Stopping container agnhost-container
Aug 26 00:46:27.545: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 00:46:27.545: INFO: 
Aug 26 00:46:27.708: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 00:46:27.870: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 9870 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 255 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support file as subpath [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227

      Aug 26 00:46:24.416: Unexpected error:
          <*errors.errorString | 0xc00212fc60>: {
              s: "expected pod \"pod-subpath-test-preprovisionedpv-w8wz\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-preprovisionedpv-w8wz\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-preprovisionedpv-w8wz" success: Gave up after waiting 5m0s for pod "pod-subpath-test-preprovisionedpv-w8wz" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":0,"skipped":26,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:33.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":13,"skipped":65,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:33.911: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 117 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":13,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete","total":-1,"completed":3,"skipped":22,"failed":1,"failures":["[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:34.533: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 50 lines ...
STEP: starting aws-injector
STEP: Deleting pod aws-injector in namespace volume-560
Aug 26 00:46:15.048: INFO: Waiting for pod aws-injector to disappear
Aug 26 00:46:15.206: INFO: Pod aws-injector still exists
Aug 26 00:46:17.206: INFO: Waiting for pod aws-injector to disappear
Aug 26 00:46:17.364: INFO: Pod aws-injector no longer exists
Aug 26 00:46:17.365: FAIL: Failed to create injector pod: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/volume.InjectContent(0xc002621ce0, 0xc002875ed0, 0xa, 0x4befa85, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:539 +0x97e
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:182 +0x45f
... skipping 3 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc0037a0300, 0x4dec428)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: cleaning the environment after aws
Aug 26 00:46:18.188: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-07d293b0a83a656a5", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-07d293b0a83a656a5 is currently attached to i-039ba835ddc4f059b
	status code: 400, request id: 7655f104-dcaa-4402-ab8c-1763386da43b
Aug 26 00:46:24.027: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-07d293b0a83a656a5", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-07d293b0a83a656a5 is currently attached to i-039ba835ddc4f059b
	status code: 400, request id: ef96ae05-5a5e-491d-99be-3f22c065a143
Aug 26 00:46:29.819: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-07d293b0a83a656a5".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "volume-560".
STEP: Found 8 events.
Aug 26 00:46:29.979: INFO: At 2021-08-26 00:41:14 +0000 UTC - event for aws-injector: {default-scheduler } Scheduled: Successfully assigned volume-560/aws-injector to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:46:29.979: INFO: At 2021-08-26 00:41:14 +0000 UTC - event for aws-injector: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "aws-volume-0" : "error attaching EBS volume \"vol-07d293b0a83a656a5\"" to instance "i-039ba835ddc4f059b" since volume is in "creating" state
Aug 26 00:46:29.979: INFO: At 2021-08-26 00:41:18 +0000 UTC - event for aws-injector: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "aws-volume-0" 
Aug 26 00:46:29.979: INFO: At 2021-08-26 00:41:23 +0000 UTC - event for aws-injector: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:29.979: INFO: At 2021-08-26 00:42:49 +0000 UTC - event for aws-injector: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:29.979: INFO: At 2021-08-26 00:42:49 +0000 UTC - event for aws-injector: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:29.979: INFO: At 2021-08-26 00:42:50 +0000 UTC - event for aws-injector: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:29.979: INFO: At 2021-08-26 00:42:50 +0000 UTC - event for aws-injector: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 00:46:30.137: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 00:46:30.137: INFO: 
Aug 26 00:46:30.302: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 00:46:30.461: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 9870 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 26 00:46:30.461: INFO: 
... skipping 239 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Aug 26 00:46:17.365: Failed to create injector pod: timed out waiting for the condition

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:539
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":1,"skipped":5,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data"]}

S
------------------------------
[BeforeEach] [sig-storage] Flexvolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 118 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277
Aug 26 00:46:34.756: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-c90b31ee-b400-45b0-a59f-5b7b573f9769" in namespace "security-context-test-6532" to be "Succeeded or Failed"
Aug 26 00:46:34.918: INFO: Pod "busybox-privileged-true-c90b31ee-b400-45b0-a59f-5b7b573f9769": Phase="Pending", Reason="", readiness=false. Elapsed: 161.466073ms
Aug 26 00:46:37.080: INFO: Pod "busybox-privileged-true-c90b31ee-b400-45b0-a59f-5b7b573f9769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32356528s
Aug 26 00:46:39.242: INFO: Pod "busybox-privileged-true-c90b31ee-b400-45b0-a59f-5b7b573f9769": Phase="Pending", Reason="", readiness=false. Elapsed: 4.485620273s
Aug 26 00:46:41.404: INFO: Pod "busybox-privileged-true-c90b31ee-b400-45b0-a59f-5b7b573f9769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.64764812s
Aug 26 00:46:41.404: INFO: Pod "busybox-privileged-true-c90b31ee-b400-45b0-a59f-5b7b573f9769" satisfied condition "Succeeded or Failed"
Aug 26 00:46:41.567: INFO: Got logs for pod "busybox-privileged-true-c90b31ee-b400-45b0-a59f-5b7b573f9769": ""
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:41.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6532" for this suite.

... skipping 13 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 26 00:46:35.487: INFO: Waiting up to 5m0s for pod "busybox-user-65534-7ed5fa8e-febe-4407-8b25-3b40c32d42a4" in namespace "security-context-test-9795" to be "Succeeded or Failed"
Aug 26 00:46:35.642: INFO: Pod "busybox-user-65534-7ed5fa8e-febe-4407-8b25-3b40c32d42a4": Phase="Pending", Reason="", readiness=false. Elapsed: 154.662767ms
Aug 26 00:46:37.797: INFO: Pod "busybox-user-65534-7ed5fa8e-febe-4407-8b25-3b40c32d42a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310097371s
Aug 26 00:46:39.952: INFO: Pod "busybox-user-65534-7ed5fa8e-febe-4407-8b25-3b40c32d42a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.465316939s
Aug 26 00:46:42.108: INFO: Pod "busybox-user-65534-7ed5fa8e-febe-4407-8b25-3b40c32d42a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.620465258s
Aug 26 00:46:42.108: INFO: Pod "busybox-user-65534-7ed5fa8e-febe-4407-8b25-3b40c32d42a4" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 00:46:42.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9795" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":24,"failed":1,"failures":["[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:46:34.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-9c6089e6-5142-4c4e-9281-0eadfc6f8949
STEP: Creating a pod to test consume secrets
Aug 26 00:46:35.665: INFO: Waiting up to 5m0s for pod "pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9" in namespace "secrets-4112" to be "Succeeded or Failed"
Aug 26 00:46:35.825: INFO: Pod "pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 160.554196ms
Aug 26 00:46:37.989: INFO: Pod "pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323720261s
Aug 26 00:46:40.150: INFO: Pod "pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484679538s
Aug 26 00:46:42.311: INFO: Pod "pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.645607663s
STEP: Saw pod success
Aug 26 00:46:42.311: INFO: Pod "pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9" satisfied condition "Succeeded or Failed"
Aug 26 00:46:42.474: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9 container secret-env-test: <nil>
STEP: delete the pod
Aug 26 00:46:42.802: INFO: Waiting for pod pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9 to disappear
Aug 26 00:46:42.963: INFO: Pod pod-secrets-167e8c3e-c930-4296-ab59-e10a2424dbf9 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.756 seconds]
[sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":101,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 39 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:41:33.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:103
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Aug 26 00:41:34.943: INFO: Waiting up to 5m0s for pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0" in namespace "security-context-7746" to be "Succeeded or Failed"
Aug 26 00:41:35.102: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 158.459071ms
Aug 26 00:41:37.261: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317363431s
Aug 26 00:41:39.419: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47613697s
Aug 26 00:41:41.578: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.635316436s
Aug 26 00:41:43.737: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.794006008s
Aug 26 00:41:45.899: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.956049204s
... skipping 127 lines ...
Aug 26 00:46:22.431: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.487880428s
Aug 26 00:46:24.592: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.648966618s
Aug 26 00:46:26.753: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.810050226s
Aug 26 00:46:28.914: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.97108926s
Aug 26 00:46:31.078: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.135304702s
Aug 26 00:46:33.242: INFO: Pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.299302881s
Aug 26 00:46:35.565: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0" container "test-container": the server rejected our request for an unknown reason (get pods security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0)
STEP: delete the pod
Aug 26 00:46:35.727: INFO: Waiting for pod security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0 to disappear
Aug 26 00:46:35.888: INFO: Pod security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0 still exists
Aug 26 00:46:37.888: INFO: Waiting for pod security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0 to disappear
Aug 26 00:46:38.049: INFO: Pod security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0 still exists
Aug 26 00:46:39.888: INFO: Waiting for pod security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0 to disappear
Aug 26 00:46:40.049: INFO: Pod security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0 no longer exists
Aug 26 00:46:40.049: FAIL: Unexpected error:
    <*errors.errorString | 0xc00233cb10>: {
        s: "expected pod \"security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0\" success: Gave up after waiting 5m0s for pod \"security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0\" to be \"Succeeded or Failed\"",
    }
    expected pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0" success: Gave up after waiting 5m0s for pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0010598c0, 0x4c87b07, 0x22, 0xc003779800, 0x0, 0xc0011b71b8, 0x2, 0x2, 0x4df0110)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 11 lines ...
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "security-context-7746".
STEP: Found 6 events.
Aug 26 00:46:40.211: INFO: At 2021-08-26 00:41:34 +0000 UTC - event for security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0: {default-scheduler } Scheduled: Successfully assigned security-context-7746/security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0 to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:46:40.211: INFO: At 2021-08-26 00:41:35 +0000 UTC - event for security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:40.211: INFO: At 2021-08-26 00:43:23 +0000 UTC - event for security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:40.211: INFO: At 2021-08-26 00:43:23 +0000 UTC - event for security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:40.211: INFO: At 2021-08-26 00:43:24 +0000 UTC - event for security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 00:46:40.211: INFO: At 2021-08-26 00:43:24 +0000 UTC - event for security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 00:46:40.371: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 00:46:40.371: INFO: 
Aug 26 00:46:40.534: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 00:46:40.695: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 9870 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 00:46:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 26 00:46:40.695: INFO: 
... skipping 218 lines ...
• Failure [312.364 seconds]
[k8s.io] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should support container.SecurityContext.RunAsUser [LinuxOnly] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:103

  Aug 26 00:46:40.049: Unexpected error:
      <*errors.errorString | 0xc00233cb10>: {
          s: "expected pod \"security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0\" success: Gave up after waiting 5m0s for pod \"security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0\" to be \"Succeeded or Failed\"",
      }
      expected pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0" success: Gave up after waiting 5m0s for pod "security-context-ade516fd-6b58-42d3-ba43-459eb97ba0a0" to be "Succeeded or Failed"
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":8,"failed":1,"failures":["[k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 00:46:46.387: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 14 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:832
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":22,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:41:15.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 210 lines ...
Aug 26 00:46:34.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:21, Replicas:6, UpdatedReplicas:6, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, Reason:"ProgressDeadlineExceeded", Message:"ReplicaSet \"webserver-dd94f59b7\" has timed out progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 00:46:36.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:21, Replicas:6, UpdatedReplicas:6, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, Reason:"ProgressDeadlineExceeded", Message:"ReplicaSet \"webserver-dd94f59b7\" has timed out progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 00:46:38.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:21, Replicas:6, UpdatedReplicas:6, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, Reason:"ProgressDeadlineExceeded", Message:"ReplicaSet \"webserver-dd94f59b7\" has timed out progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 00:46:40.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:21, Replicas:6, UpdatedReplicas:6, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, Reason:"ProgressDeadlineExceeded", Message:"ReplicaSet \"webserver-dd94f59b7\" has timed out progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 00:46:42.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:21, Replicas:6, UpdatedReplicas:6, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, Reason:"ProgressDeadlineExceeded", Message:"ReplicaSet \"webserver-dd94f59b7\" has timed out progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 00:46:42.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:21, Replicas:6, UpdatedReplicas:6, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, Reason:"ProgressDeadlineExceeded", Message:"ReplicaSet \"webserver-dd94f59b7\" has timed out progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 00:46:42.738: FAIL: Unexpected error:
    <*errors.errorString | 0xc0031974d0>: {
        s: "error waiting for deployment \"webserver\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:21, Replicas:6, UpdatedReplicas:6, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"ReplicaSet \\\"webserver-dd94f59b7\\\" has timed out progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "webserver" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:21, Replicas:6, UpdatedReplicas:6, ReadyReplicas:4, AvailableReplicas:4, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535276, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765535332, loc:(*time.Location)(0x7718ac0)}}, Reason:"ProgressDeadlineExceeded", Message:"ReplicaSet \"webserver-dd94f59b7\" has timed out progressing."}}, CollisionCount:(*int32)(nil)}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.testIterativeDeployments(0xc0003c7600)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:648 +0x184f
k8s.io/kubernetes/test/e2e/apps.glob..func4.8()
... skipping 45 lines ...
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:16 +0000 UTC - event for webserver-dd94f59b7-mbsm7: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-mbsm7 to ip-172-20-60-101.ap-northeast-2.compute.internal
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:16 +0000 UTC - event for webserver-dd94f59b7-wtgj8: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-wtgj8 to ip-172-20-62-163.ap-northeast-2.compute.internal
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver: {deployment-controller } ScalingReplicaSet: Scaled up replica set webserver-dd94f59b7 to 7
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-dd94f59b7-lwwdc
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-dd94f59b7-fnjl7
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7-28w2z: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7-6btxw: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} FailedMount: MountVolume.SetUp failed for volume "default-token-dvxhl" : failed to sync secret cache: timed out waiting for the condition
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7-fnjl7: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-fnjl7 to ip-172-20-62-163.ap-northeast-2.compute.internal
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7-hm2f6: {kubelet ip-172-20-62-163.ap-northeast-2.compute.internal} FailedMount: MountVolume.SetUp failed for volume "default-token-dvxhl" : failed to sync secret cache: timed out waiting for the condition
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7-lnc7q: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7-lwwdc: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-lwwdc to ip-172-20-61-11.ap-northeast-2.compute.internal
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7-mbsm7: {kubelet ip-172-20-60-101.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:17 +0000 UTC - event for webserver-dd94f59b7-wtgj8: {kubelet ip-172-20-62-163.ap-northeast-2.compute.internal} FailedMount: MountVolume.SetUp failed for volume "default-token-dvxhl" : failed to sync secret cache: timed out waiting for the condition
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:18 +0000 UTC - event for webserver: {deployment-controller } DeploymentRollbackRevisionNotFound: Unable to find last revision.
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:18 +0000 UTC - event for webserver: {deployment-controller } ScalingReplicaSet: Scaled up replica set webserver-dd94f59b7 to 8
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:18 +0000 UTC - event for webserver-dd94f59b7: {replicaset-controller } SuccessfulCreate: Created pod: webserver-dd94f59b7-vjcrw
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:18 +0000 UTC - event for webserver-dd94f59b7-6btxw: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:18 +0000 UTC - event for webserver-dd94f59b7-lwwdc: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.533: INFO: At 2021-08-26 00:41:18 +0000 UTC - event for webserver-dd94f59b7-vjcrw: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-vjcrw to ip-172-20-62-60.ap-northeast-2.compute.internal
... skipping 38 lines ...
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-2x49w: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Started: Started container httpd
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-2x49w: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Created: Created container httpd
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-4vdcb: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-4vdcb to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-4vdcb: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-59cld: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-59cld to ip-172-20-62-163.ap-northeast-2.compute.internal
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-6btxw: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 20.677601985s
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-6btxw: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Failed: Error: cannot find volume "default-token-dvxhl" to mount into container "httpd"
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-992cd: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-992cd to ip-172-20-61-11.ap-northeast-2.compute.internal
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-mbsm7: {kubelet ip-172-20-60-101.ap-northeast-2.compute.internal} Killing: Stopping container httpd
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-nml27: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-nml27 to ip-172-20-62-163.ap-northeast-2.compute.internal
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-nml27: {kubelet ip-172-20-62-163.ap-northeast-2.compute.internal} Created: Created container httpd
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-nml27: {kubelet ip-172-20-62-163.ap-northeast-2.compute.internal} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:39 +0000 UTC - event for webserver-dd94f59b7-pwpd6: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
... skipping 7 lines ...
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:40 +0000 UTC - event for webserver-dd94f59b7-992cd: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Created: Created container httpd
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:40 +0000 UTC - event for webserver-dd94f59b7-bbhgv: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-bbhgv to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:40 +0000 UTC - event for webserver-dd94f59b7-bbhgv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:40 +0000 UTC - event for webserver-dd94f59b7-nml27: {kubelet ip-172-20-62-163.ap-northeast-2.compute.internal} Started: Started container httpd
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:41 +0000 UTC - event for webserver-dd94f59b7-jlnqd: {default-scheduler } Scheduled: Successfully assigned deployment-5691/webserver-dd94f59b7-jlnqd to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:42 +0000 UTC - event for webserver-dd94f59b7-jlnqd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:43 +0000 UTC - event for webserver-dd94f59b7-8xcz5: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Failed: Error: cannot find volume "default-token-dvxhl" to mount into container "httpd"
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:43 +0000 UTC - event for webserver-dd94f59b7-8xcz5: {kubelet ip-172-20-61-11.ap-northeast-2.compute.internal} Pulled: Successfully pulled image "docker.io/library/httpd:2.4.38-alpine" in 15.239161449s
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:52 +0000 UTC - event for webserver-dd94f59b7-28w2z: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/httpd:2.4.38-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/httpd/manifests/sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:41:52 +0000 UTC - event for webserver-dd94f59b7-28w2z: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:42:03 +0000 UTC - event for webserver-dd94f59b7-lnc7q: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/httpd:2.4.38-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/httpd/manifests/sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:42:03 +0000 UTC - event for webserver-dd94f59b7-lnc7q: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:42:14 +0000 UTC - event for webserver-dd94f59b7-vjcrw: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:42:14 +0000 UTC - event for webserver-dd94f59b7-vjcrw: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/httpd:2.4.38-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/httpd/manifests/sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:11 +0000 UTC - event for webserver-dd94f59b7-nthvp: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:11 +0000 UTC - event for webserver-dd94f59b7-nthvp: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/httpd:2.4.38-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/httpd/manifests/sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:19 +0000 UTC - event for webserver-dd94f59b7-hm2f6: {kubelet ip-172-20-62-163.ap-northeast-2.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[default-token-dvxhl], unattached volumes=[default-token-dvxhl]: timed out waiting for the condition
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:34 +0000 UTC - event for webserver-dd94f59b7-pwpd6: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:34 +0000 UTC - event for webserver-dd94f59b7-pwpd6: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/httpd:2.4.38-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/httpd/manifests/sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:46 +0000 UTC - event for webserver-dd94f59b7-4vdcb: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:46 +0000 UTC - event for webserver-dd94f59b7-4vdcb: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/httpd:2.4.38-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/httpd/manifests/sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:47 +0000 UTC - event for webserver-dd94f59b7-4vdcb: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:47 +0000 UTC - event for webserver-dd94f59b7-4vdcb: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:57 +0000 UTC - event for webserver-dd94f59b7-bbhgv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/httpd:2.4.38-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/httpd/manifests/sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:43:57 +0000 UTC - event for webserver-dd94f59b7-bbhgv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:44:09 +0000 UTC - event for webserver-dd94f59b7-jlnqd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:44:09 +0000 UTC - event for webserver-dd94f59b7-jlnqd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/httpd:2.4.38-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/httpd:2.4.38-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/httpd/manifests/sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:44:10 +0000 UTC - event for webserver-dd94f59b7-jlnqd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 00:46:43.534: INFO: At 2021-08-26 00:44:10 +0000 UTC - event for webserver-dd94f59b7-jlnqd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/httpd:2.4.38-alpine"
Aug 26 00:46:43.693: INFO: POD                        NODE                                              PHASE    GRACE  CONDITIONS
Aug 26 00:46:43.693: INFO: webserver-dd94f59b7-4vdcb  ip-172-20-62-60.ap-northeast-2.compute.internal   Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:39 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:39 +0000 UTC  }]
Aug 26 00:46:43.693: INFO: webserver-dd94f59b7-59cld  ip-172-20-62-163.ap-northeast-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:41 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:39 +0000 UTC  }]
Aug 26 00:46:43.693: INFO: webserver-dd94f59b7-992cd  ip-172-20-61-11.ap-northeast-2.compute.internal   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:39 +0000 UTC  }]
Aug 26 00:46:43.693: INFO: webserver-dd94f59b7-cf8bw  ip-172-20-62-163.ap-northeast-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:41:22 +0000 UTC  }]
... skipping 57133 lines ...
Aug 26 01:00:47.558: INFO: PersistentVolumeClaim pvc-c7dms found but phase is Pending instead of Bound.
Aug 26 01:00:49.719: INFO: PersistentVolumeClaim pvc-c7dms found and phase=Bound (2.321046224s)
Aug 26 01:00:49.719: INFO: Waiting up to 3m0s for PersistentVolume local-tzmdq to have phase Bound
Aug 26 01:00:49.879: INFO: PersistentVolume local-tzmdq found and phase=Bound (160.372057ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-gvqb
STEP: Creating a pod to test exec-volume-test
Aug 26 01:00:50.361: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-gvqb" in namespace "volume-4584" to be "Succeeded or Failed"
Aug 26 01:00:50.521: INFO: Pod "exec-volume-test-preprovisionedpv-gvqb": Phase="Pending", Reason="", readiness=false. Elapsed: 160.310301ms
Aug 26 01:00:52.682: INFO: Pod "exec-volume-test-preprovisionedpv-gvqb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.321681816s
STEP: Saw pod success
Aug 26 01:00:52.682: INFO: Pod "exec-volume-test-preprovisionedpv-gvqb" satisfied condition "Succeeded or Failed"
Aug 26 01:00:52.843: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-gvqb container exec-container-preprovisionedpv-gvqb: <nil>
STEP: delete the pod
Aug 26 01:00:53.170: INFO: Waiting for pod exec-volume-test-preprovisionedpv-gvqb to disappear
Aug 26 01:00:53.330: INFO: Pod exec-volume-test-preprovisionedpv-gvqb no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-gvqb
Aug 26 01:00:53.330: INFO: Deleting pod "exec-volume-test-preprovisionedpv-gvqb" in namespace "volume-4584"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":18,"skipped":171,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume"]}

S
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:00:55.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:124
Aug 26 01:00:56.675: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-5928" to be "Succeeded or Failed"
Aug 26 01:00:56.833: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 158.061367ms
Aug 26 01:00:58.992: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317134584s
Aug 26 01:00:58.993: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:00:59.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5928" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":22,"skipped":155,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:00:59.501: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 82 lines ...
Aug 26 01:00:47.284: INFO: PersistentVolumeClaim pvc-lskx8 found but phase is Pending instead of Bound.
Aug 26 01:00:49.444: INFO: PersistentVolumeClaim pvc-lskx8 found and phase=Bound (10.965073098s)
Aug 26 01:00:49.444: INFO: Waiting up to 3m0s for PersistentVolume local-lwpbd to have phase Bound
Aug 26 01:00:49.604: INFO: PersistentVolume local-lwpbd found and phase=Bound (159.65602ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j98m
STEP: Creating a pod to test subpath
Aug 26 01:00:50.085: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j98m" in namespace "provisioning-1393" to be "Succeeded or Failed"
Aug 26 01:00:50.245: INFO: Pod "pod-subpath-test-preprovisionedpv-j98m": Phase="Pending", Reason="", readiness=false. Elapsed: 159.760265ms
Aug 26 01:00:52.405: INFO: Pod "pod-subpath-test-preprovisionedpv-j98m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320105538s
Aug 26 01:00:54.565: INFO: Pod "pod-subpath-test-preprovisionedpv-j98m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.48028553s
STEP: Saw pod success
Aug 26 01:00:54.565: INFO: Pod "pod-subpath-test-preprovisionedpv-j98m" satisfied condition "Succeeded or Failed"
Aug 26 01:00:54.725: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-j98m container test-container-subpath-preprovisionedpv-j98m: <nil>
STEP: delete the pod
Aug 26 01:00:55.052: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j98m to disappear
Aug 26 01:00:55.213: INFO: Pod pod-subpath-test-preprovisionedpv-j98m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j98m
Aug 26 01:00:55.213: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j98m" in namespace "provisioning-1393"
STEP: Creating pod pod-subpath-test-preprovisionedpv-j98m
STEP: Creating a pod to test subpath
Aug 26 01:00:55.535: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j98m" in namespace "provisioning-1393" to be "Succeeded or Failed"
Aug 26 01:00:55.695: INFO: Pod "pod-subpath-test-preprovisionedpv-j98m": Phase="Pending", Reason="", readiness=false. Elapsed: 159.636185ms
Aug 26 01:00:57.856: INFO: Pod "pod-subpath-test-preprovisionedpv-j98m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.319980366s
STEP: Saw pod success
Aug 26 01:00:57.856: INFO: Pod "pod-subpath-test-preprovisionedpv-j98m" satisfied condition "Succeeded or Failed"
Aug 26 01:00:58.016: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-j98m container test-container-subpath-preprovisionedpv-j98m: <nil>
STEP: delete the pod
Aug 26 01:00:58.343: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j98m to disappear
Aug 26 01:00:58.503: INFO: Pod pod-subpath-test-preprovisionedpv-j98m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j98m
Aug 26 01:00:58.503: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j98m" in namespace "provisioning-1393"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":39,"skipped":282,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:00.696: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":32,"skipped":178,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:01.196: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-b21fa1e9-0af7-40ee-a6be-b9dc4e25bbb1
STEP: Creating a pod to test consume configMaps
Aug 26 01:00:58.713: INFO: Waiting up to 5m0s for pod "pod-configmaps-83194c07-9b36-460d-bcc4-08cf7d73318b" in namespace "configmap-774" to be "Succeeded or Failed"
Aug 26 01:00:58.871: INFO: Pod "pod-configmaps-83194c07-9b36-460d-bcc4-08cf7d73318b": Phase="Pending", Reason="", readiness=false. Elapsed: 158.25262ms
Aug 26 01:01:01.030: INFO: Pod "pod-configmaps-83194c07-9b36-460d-bcc4-08cf7d73318b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.316759425s
STEP: Saw pod success
Aug 26 01:01:01.030: INFO: Pod "pod-configmaps-83194c07-9b36-460d-bcc4-08cf7d73318b" satisfied condition "Succeeded or Failed"
Aug 26 01:01:01.190: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-configmaps-83194c07-9b36-460d-bcc4-08cf7d73318b container configmap-volume-test: <nil>
STEP: delete the pod
Aug 26 01:01:01.513: INFO: Waiting for pod pod-configmaps-83194c07-9b36-460d-bcc4-08cf7d73318b to disappear
Aug 26 01:01:01.672: INFO: Pod pod-configmaps-83194c07-9b36-460d-bcc4-08cf7d73318b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:01.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-774" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":222,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:02.034: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 113 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 26 01:01:00.489: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ecb6c76-bad5-4f28-b056-fea860efaafc" in namespace "projected-785" to be "Succeeded or Failed"
Aug 26 01:01:00.647: INFO: Pod "downwardapi-volume-6ecb6c76-bad5-4f28-b056-fea860efaafc": Phase="Pending", Reason="", readiness=false. Elapsed: 158.493202ms
Aug 26 01:01:02.806: INFO: Pod "downwardapi-volume-6ecb6c76-bad5-4f28-b056-fea860efaafc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317514137s
STEP: Saw pod success
Aug 26 01:01:02.806: INFO: Pod "downwardapi-volume-6ecb6c76-bad5-4f28-b056-fea860efaafc" satisfied condition "Succeeded or Failed"
Aug 26 01:01:02.965: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod downwardapi-volume-6ecb6c76-bad5-4f28-b056-fea860efaafc container client-container: <nil>
STEP: delete the pod
Aug 26 01:01:03.303: INFO: Waiting for pod downwardapi-volume-6ecb6c76-bad5-4f28-b056-fea860efaafc to disappear
Aug 26 01:01:03.462: INFO: Pod downwardapi-volume-6ecb6c76-bad5-4f28-b056-fea860efaafc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:03.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-785" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":162,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:03.820: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 26 01:01:01.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30a2eb0b-44ed-4cd3-8e5b-5cb7ef53090d" in namespace "projected-9623" to be "Succeeded or Failed"
Aug 26 01:01:01.854: INFO: Pod "downwardapi-volume-30a2eb0b-44ed-4cd3-8e5b-5cb7ef53090d": Phase="Pending", Reason="", readiness=false. Elapsed: 160.080166ms
Aug 26 01:01:04.014: INFO: Pod "downwardapi-volume-30a2eb0b-44ed-4cd3-8e5b-5cb7ef53090d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.320140302s
STEP: Saw pod success
Aug 26 01:01:04.014: INFO: Pod "downwardapi-volume-30a2eb0b-44ed-4cd3-8e5b-5cb7ef53090d" satisfied condition "Succeeded or Failed"
Aug 26 01:01:04.174: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod downwardapi-volume-30a2eb0b-44ed-4cd3-8e5b-5cb7ef53090d container client-container: <nil>
STEP: delete the pod
Aug 26 01:01:04.502: INFO: Waiting for pod downwardapi-volume-30a2eb0b-44ed-4cd3-8e5b-5cb7ef53090d to disappear
Aug 26 01:01:04.661: INFO: Pod downwardapi-volume-30a2eb0b-44ed-4cd3-8e5b-5cb7ef53090d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:04.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9623" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":291,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:04.993: INFO: Driver windows-gcepd doesn't support  -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Aug 26 01:01:03.088: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8334" to be "Succeeded or Failed"
Aug 26 01:01:03.247: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 158.434229ms
Aug 26 01:01:05.406: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317093459s
STEP: Saw pod success
Aug 26 01:01:05.406: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 26 01:01:05.564: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Aug 26 01:01:05.888: INFO: Waiting for pod pod-host-path-test to disappear
Aug 26 01:01:06.046: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:06.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8334" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":28,"skipped":246,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 93 lines ...
Aug 26 01:00:37.671: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-6860-aws-scpvj54
STEP: creating a claim
Aug 26 01:00:37.835: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Aug 26 01:00:38.166: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
Aug 26 01:00:38.493: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:40.827: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:42.821: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:44.822: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:46.821: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:48.824: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:50.821: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:52.822: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:54.822: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:56.823: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:00:58.821: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:01:00.822: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:01:02.821: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:01:04.822: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:01:06.828: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:01:08.823: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-6860-aws-scpvj54",
  	... // 2 identical fields
  }

Aug 26 01:01:09.151: INFO: Error updating pvc awskbqpj: PersistentVolumeClaim "awskbqpj" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":38,"skipped":278,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:10.000: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 66 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-7e1f7e4a-5c64-4d23-a642-423cf9f575ce
STEP: Creating a pod to test consume secrets
Aug 26 01:01:07.498: INFO: Waiting up to 5m0s for pod "pod-secrets-1b8a5c10-9a20-4559-aacd-ebd5258af64c" in namespace "secrets-1984" to be "Succeeded or Failed"
Aug 26 01:01:07.657: INFO: Pod "pod-secrets-1b8a5c10-9a20-4559-aacd-ebd5258af64c": Phase="Pending", Reason="", readiness=false. Elapsed: 158.640567ms
Aug 26 01:01:09.816: INFO: Pod "pod-secrets-1b8a5c10-9a20-4559-aacd-ebd5258af64c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.317403447s
STEP: Saw pod success
Aug 26 01:01:09.816: INFO: Pod "pod-secrets-1b8a5c10-9a20-4559-aacd-ebd5258af64c" satisfied condition "Succeeded or Failed"
Aug 26 01:01:09.974: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-secrets-1b8a5c10-9a20-4559-aacd-ebd5258af64c container secret-volume-test: <nil>
STEP: delete the pod
Aug 26 01:01:10.298: INFO: Waiting for pod pod-secrets-1b8a5c10-9a20-4559-aacd-ebd5258af64c to disappear
Aug 26 01:01:10.456: INFO: Pod pod-secrets-1b8a5c10-9a20-4559-aacd-ebd5258af64c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:10.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1984" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":248,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:10.789: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 100 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
SSS
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":26,"skipped":204,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source"]}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:00:58.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":27,"skipped":204,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:14.470: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":17,"skipped":106,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:01:10.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 26 01:01:11.768: INFO: Waiting up to 5m0s for pod "pod-6f85466a-94d7-481c-a0e4-547eb8e1641b" in namespace "emptydir-9753" to be "Succeeded or Failed"
Aug 26 01:01:11.924: INFO: Pod "pod-6f85466a-94d7-481c-a0e4-547eb8e1641b": Phase="Pending", Reason="", readiness=false. Elapsed: 156.144476ms
Aug 26 01:01:14.081: INFO: Pod "pod-6f85466a-94d7-481c-a0e4-547eb8e1641b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.312644321s
STEP: Saw pod success
Aug 26 01:01:14.081: INFO: Pod "pod-6f85466a-94d7-481c-a0e4-547eb8e1641b" satisfied condition "Succeeded or Failed"
Aug 26 01:01:14.238: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-6f85466a-94d7-481c-a0e4-547eb8e1641b container test-container: <nil>
STEP: delete the pod
Aug 26 01:01:14.556: INFO: Waiting for pod pod-6f85466a-94d7-481c-a0e4-547eb8e1641b to disappear
Aug 26 01:01:14.712: INFO: Pod pod-6f85466a-94d7-481c-a0e4-547eb8e1641b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:14.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9753" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":106,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:15.037: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 1657 lines ...
• [SLOW TEST:45.241 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:199
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":15,"skipped":84,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:17.582: INFO: Only supported for providers [azure] (not aws)
... skipping 59 lines ...
• [SLOW TEST:7.230 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":89,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:24.853: INFO: Only supported for providers [azure] (not aws)
... skipping 40 lines ...
Aug 26 01:01:17.243: INFO: PersistentVolumeClaim pvc-d42cn found but phase is Pending instead of Bound.
Aug 26 01:01:19.407: INFO: PersistentVolumeClaim pvc-d42cn found and phase=Bound (4.491720019s)
Aug 26 01:01:19.407: INFO: Waiting up to 3m0s for PersistentVolume local-p9q8n to have phase Bound
Aug 26 01:01:19.571: INFO: PersistentVolume local-p9q8n found and phase=Bound (163.720666ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-fqsj
STEP: Creating a pod to test exec-volume-test
Aug 26 01:01:20.063: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-fqsj" in namespace "volume-4097" to be "Succeeded or Failed"
Aug 26 01:01:20.227: INFO: Pod "exec-volume-test-preprovisionedpv-fqsj": Phase="Pending", Reason="", readiness=false. Elapsed: 163.924141ms
Aug 26 01:01:22.391: INFO: Pod "exec-volume-test-preprovisionedpv-fqsj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.328295383s
STEP: Saw pod success
Aug 26 01:01:22.391: INFO: Pod "exec-volume-test-preprovisionedpv-fqsj" satisfied condition "Succeeded or Failed"
Aug 26 01:01:22.558: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod exec-volume-test-preprovisionedpv-fqsj container exec-container-preprovisionedpv-fqsj: <nil>
STEP: delete the pod
Aug 26 01:01:22.895: INFO: Waiting for pod exec-volume-test-preprovisionedpv-fqsj to disappear
Aug 26 01:01:23.059: INFO: Pod exec-volume-test-preprovisionedpv-fqsj no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-fqsj
Aug 26 01:01:23.059: INFO: Deleting pod "exec-volume-test-preprovisionedpv-fqsj" in namespace "volume-4097"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":39,"skipped":288,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:26.168: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 64 lines ...
Aug 26 01:01:16.460: INFO: PersistentVolumeClaim pvc-cbm67 found but phase is Pending instead of Bound.
Aug 26 01:01:18.620: INFO: PersistentVolumeClaim pvc-cbm67 found and phase=Bound (15.285240173s)
Aug 26 01:01:18.620: INFO: Waiting up to 3m0s for PersistentVolume local-c82sx to have phase Bound
Aug 26 01:01:18.781: INFO: PersistentVolume local-c82sx found and phase=Bound (160.225734ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fpm7
STEP: Creating a pod to test subpath
Aug 26 01:01:19.264: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fpm7" in namespace "provisioning-687" to be "Succeeded or Failed"
Aug 26 01:01:19.425: INFO: Pod "pod-subpath-test-preprovisionedpv-fpm7": Phase="Pending", Reason="", readiness=false. Elapsed: 160.275444ms
Aug 26 01:01:21.585: INFO: Pod "pod-subpath-test-preprovisionedpv-fpm7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320939977s
Aug 26 01:01:23.747: INFO: Pod "pod-subpath-test-preprovisionedpv-fpm7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.482650242s
STEP: Saw pod success
Aug 26 01:01:23.747: INFO: Pod "pod-subpath-test-preprovisionedpv-fpm7" satisfied condition "Succeeded or Failed"
Aug 26 01:01:23.908: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-fpm7 container test-container-volume-preprovisionedpv-fpm7: <nil>
STEP: delete the pod
Aug 26 01:01:24.237: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fpm7 to disappear
Aug 26 01:01:24.398: INFO: Pod pod-subpath-test-preprovisionedpv-fpm7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fpm7
Aug 26 01:01:24.399: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fpm7" in namespace "provisioning-687"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":19,"skipped":172,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:26.594: INFO: Driver vsphere doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 38 lines ...
Aug 26 01:01:16.971: INFO: PersistentVolumeClaim pvc-2zvjf found but phase is Pending instead of Bound.
Aug 26 01:01:19.130: INFO: PersistentVolumeClaim pvc-2zvjf found and phase=Bound (13.115469571s)
Aug 26 01:01:19.130: INFO: Waiting up to 3m0s for PersistentVolume local-hhkzq to have phase Bound
Aug 26 01:01:19.289: INFO: PersistentVolume local-hhkzq found and phase=Bound (158.726562ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-m6r4
STEP: Creating a pod to test subpath
Aug 26 01:01:19.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-m6r4" in namespace "provisioning-9388" to be "Succeeded or Failed"
Aug 26 01:01:19.929: INFO: Pod "pod-subpath-test-preprovisionedpv-m6r4": Phase="Pending", Reason="", readiness=false. Elapsed: 158.759821ms
Aug 26 01:01:22.089: INFO: Pod "pod-subpath-test-preprovisionedpv-m6r4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318124544s
Aug 26 01:01:24.248: INFO: Pod "pod-subpath-test-preprovisionedpv-m6r4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.477320823s
STEP: Saw pod success
Aug 26 01:01:24.248: INFO: Pod "pod-subpath-test-preprovisionedpv-m6r4" satisfied condition "Succeeded or Failed"
Aug 26 01:01:24.407: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-m6r4 container test-container-volume-preprovisionedpv-m6r4: <nil>
STEP: delete the pod
Aug 26 01:01:24.732: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-m6r4 to disappear
Aug 26 01:01:24.894: INFO: Pod pod-subpath-test-preprovisionedpv-m6r4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-m6r4
Aug 26 01:01:24.894: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-m6r4" in namespace "provisioning-9388"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":33,"skipped":180,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:28.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1708" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":40,"skipped":292,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:28.536: INFO: Only supported for providers [openstack] (not aws)
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 26 01:01:25.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7aa851f-836a-460a-a286-282d4cf1c3c6" in namespace "downward-api-7947" to be "Succeeded or Failed"
Aug 26 01:01:25.994: INFO: Pod "downwardapi-volume-d7aa851f-836a-460a-a286-282d4cf1c3c6": Phase="Pending", Reason="", readiness=false. Elapsed: 158.886856ms
Aug 26 01:01:28.153: INFO: Pod "downwardapi-volume-d7aa851f-836a-460a-a286-282d4cf1c3c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.318391636s
STEP: Saw pod success
Aug 26 01:01:28.153: INFO: Pod "downwardapi-volume-d7aa851f-836a-460a-a286-282d4cf1c3c6" satisfied condition "Succeeded or Failed"
Aug 26 01:01:28.312: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod downwardapi-volume-d7aa851f-836a-460a-a286-282d4cf1c3c6 container client-container: <nil>
STEP: delete the pod
Aug 26 01:01:28.637: INFO: Waiting for pod downwardapi-volume-d7aa851f-836a-460a-a286-282d4cf1c3c6 to disappear
Aug 26 01:01:28.796: INFO: Pod downwardapi-volume-d7aa851f-836a-460a-a286-282d4cf1c3c6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:28.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7947" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":94,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it"]}

SS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:01:27.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-1224/configmap-test-e9321b77-4cbb-4236-aa58-d44cbba8118d
STEP: Creating a pod to test consume configMaps
Aug 26 01:01:28.193: INFO: Waiting up to 5m0s for pod "pod-configmaps-913f4afc-cb74-4d1d-9743-473549377edb" in namespace "configmap-1224" to be "Succeeded or Failed"
Aug 26 01:01:28.354: INFO: Pod "pod-configmaps-913f4afc-cb74-4d1d-9743-473549377edb": Phase="Pending", Reason="", readiness=false. Elapsed: 160.410771ms
Aug 26 01:01:30.513: INFO: Pod "pod-configmaps-913f4afc-cb74-4d1d-9743-473549377edb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.319586984s
STEP: Saw pod success
Aug 26 01:01:30.513: INFO: Pod "pod-configmaps-913f4afc-cb74-4d1d-9743-473549377edb" satisfied condition "Succeeded or Failed"
Aug 26 01:01:30.672: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-configmaps-913f4afc-cb74-4d1d-9743-473549377edb container env-test: <nil>
STEP: delete the pod
Aug 26 01:01:30.997: INFO: Waiting for pod pod-configmaps-913f4afc-cb74-4d1d-9743-473549377edb to disappear
Aug 26 01:01:31.155: INFO: Pod pod-configmaps-913f4afc-cb74-4d1d-9743-473549377edb no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":19,"skipped":107,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:31.520: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 115 lines ...
• [SLOW TEST:5.962 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1967
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":41,"skipped":304,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:34.575: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 87 lines ...
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 26 01:01:38.431: INFO: Waiting up to 5m0s for pod "client-envvars-0fcc35c5-4f33-4776-841b-01b74a96cf17" in namespace "pods-8097" to be "Succeeded or Failed"
Aug 26 01:01:38.595: INFO: Pod "client-envvars-0fcc35c5-4f33-4776-841b-01b74a96cf17": Phase="Pending", Reason="", readiness=false. Elapsed: 163.669365ms
Aug 26 01:01:40.759: INFO: Pod "client-envvars-0fcc35c5-4f33-4776-841b-01b74a96cf17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.327658065s
STEP: Saw pod success
Aug 26 01:01:40.759: INFO: Pod "client-envvars-0fcc35c5-4f33-4776-841b-01b74a96cf17" satisfied condition "Succeeded or Failed"
Aug 26 01:01:40.923: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod client-envvars-0fcc35c5-4f33-4776-841b-01b74a96cf17 container env3cont: <nil>
STEP: delete the pod
Aug 26 01:01:41.306: INFO: Waiting for pod client-envvars-0fcc35c5-4f33-4776-841b-01b74a96cf17 to disappear
Aug 26 01:01:41.470: INFO: Pod client-envvars-0fcc35c5-4f33-4776-841b-01b74a96cf17 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.183 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":313,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:41.832: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 232 lines ...
• [SLOW TEST:198.646 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:137
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":13,"skipped":87,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

S
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":75,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:00:14.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:95.180 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":75,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:49.717: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 173 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:50.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8295" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":14,"skipped":88,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:50.762: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
STEP: Destroying namespace "services-4272" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":12,"skipped":99,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:51.094: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 169 lines ...
Aug 26 01:01:41.062: INFO: Waiting for pod aws-client to disappear
Aug 26 01:01:41.220: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Aug 26 01:01:41.220: INFO: Deleting PersistentVolumeClaim "pvc-k42qj"
Aug 26 01:01:41.376: INFO: Deleting PersistentVolume "aws-7zpfk"
Aug 26 01:01:42.375: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0d75edb2e532c3380", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d75edb2e532c3380 is currently attached to i-096c5d8c993f8b9a9
	status code: 400, request id: 461b050c-8821-4459-9c73-fc2c8079a7a1
Aug 26 01:01:48.194: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0d75edb2e532c3380", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d75edb2e532c3380 is currently attached to i-096c5d8c993f8b9a9
	status code: 400, request id: 48fa234b-c2e1-457d-b726-c6d6f2f1f2a2
Aug 26 01:01:54.021: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-0d75edb2e532c3380".
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:54.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7031" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (ext3)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data","total":-1,"completed":30,"skipped":267,"failed":1,"failures":["[k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:54.344: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 64 lines ...
Aug 26 01:01:51.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 26 01:01:52.081: INFO: Waiting up to 5m0s for pod "pod-15682fe0-8d61-4418-b331-05867fb040fd" in namespace "emptydir-8856" to be "Succeeded or Failed"
Aug 26 01:01:52.237: INFO: Pod "pod-15682fe0-8d61-4418-b331-05867fb040fd": Phase="Pending", Reason="", readiness=false. Elapsed: 155.691784ms
Aug 26 01:01:54.393: INFO: Pod "pod-15682fe0-8d61-4418-b331-05867fb040fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.311747609s
STEP: Saw pod success
Aug 26 01:01:54.393: INFO: Pod "pod-15682fe0-8d61-4418-b331-05867fb040fd" satisfied condition "Succeeded or Failed"
Aug 26 01:01:54.548: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-15682fe0-8d61-4418-b331-05867fb040fd container test-container: <nil>
STEP: delete the pod
Aug 26 01:01:54.872: INFO: Waiting for pod pod-15682fe0-8d61-4418-b331-05867fb040fd to disappear
Aug 26 01:01:55.028: INFO: Pod pod-15682fe0-8d61-4418-b331-05867fb040fd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:55.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8856" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":106,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:55.373: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361

      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:832
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":15,"skipped":105,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:01:52.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:01:56.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-751" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":105,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:01:56.887: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":43,"skipped":321,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":120,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:35.539 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:124
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":20,"skipped":173,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:02.153: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 38 lines ...
Aug 26 00:56:46.693: INFO: PersistentVolumeClaim pvc-6dkcc found but phase is Pending instead of Bound.
Aug 26 00:56:48.849: INFO: PersistentVolumeClaim pvc-6dkcc found and phase=Bound (8.784372877s)
Aug 26 00:56:48.849: INFO: Waiting up to 3m0s for PersistentVolume local-r5csb to have phase Bound
Aug 26 00:56:49.006: INFO: PersistentVolume local-r5csb found and phase=Bound (156.224458ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rk4f
STEP: Creating a pod to test subpath
Aug 26 00:56:49.482: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rk4f" in namespace "provisioning-8830" to be "Succeeded or Failed"
Aug 26 00:56:49.639: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 156.289078ms
Aug 26 00:56:51.796: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313219167s
Aug 26 00:56:53.952: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.470133619s
Aug 26 00:56:56.109: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626713163s
Aug 26 00:56:58.266: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.783247163s
Aug 26 00:57:00.423: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.940424306s
... skipping 128 lines ...
Aug 26 01:01:38.699: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.21645554s
Aug 26 01:01:40.855: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.372803215s
Aug 26 01:01:43.012: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.529321726s
Aug 26 01:01:45.168: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.686082388s
Aug 26 01:01:47.325: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.842693617s
Aug 26 01:01:49.482: INFO: Pod "pod-subpath-test-preprovisionedpv-rk4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.999219756s
Aug 26 01:01:51.817: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-rk4f" container "init-volume-preprovisionedpv-rk4f": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-rk4f)
Aug 26 01:01:51.974: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-rk4f" container "test-init-subpath-preprovisionedpv-rk4f": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-rk4f)
Aug 26 01:01:52.132: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-rk4f" container "test-container-subpath-preprovisionedpv-rk4f": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-rk4f)
Aug 26 01:01:52.289: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-rk4f" container "test-container-volume-preprovisionedpv-rk4f": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-rk4f)
STEP: delete the pod
Aug 26 01:01:52.446: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rk4f to disappear
Aug 26 01:01:52.602: INFO: Pod pod-subpath-test-preprovisionedpv-rk4f still exists
Aug 26 01:01:54.603: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rk4f to disappear
Aug 26 01:01:54.759: INFO: Pod pod-subpath-test-preprovisionedpv-rk4f no longer exists
Aug 26 01:01:54.759: FAIL: Unexpected error:
    <*errors.errorString | 0xc001b94640>: {
        s: "expected pod \"pod-subpath-test-preprovisionedpv-rk4f\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-preprovisionedpv-rk4f\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-preprovisionedpv-rk4f" success: Gave up after waiting 5m0s for pod "pod-subpath-test-preprovisionedpv-rk4f" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc002788580, 0x4bf718b, 0x7, 0xc0020d0000, 0x1, 0xc0011cb130, 0x1, 0x1, 0x4df0110)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 27 lines ...
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:56:36 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-44htv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.20" already present on machine
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:56:36 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-44htv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Created: Created container agnhost-container
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:56:36 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-44htv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Started: Started container agnhost-container
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:56:39 +0000 UTC - event for pvc-6dkcc: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "provisioning-8830" not found
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:56:49 +0000 UTC - event for pod-subpath-test-preprovisionedpv-rk4f: {default-scheduler } Scheduled: Successfully assigned provisioning-8830/pod-subpath-test-preprovisionedpv-rk4f to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:56:50 +0000 UTC - event for pod-subpath-test-preprovisionedpv-rk4f: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:58:49 +0000 UTC - event for pod-subpath-test-preprovisionedpv-rk4f: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:58:49 +0000 UTC - event for pod-subpath-test-preprovisionedpv-rk4f: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:58:49 +0000 UTC - event for pod-subpath-test-preprovisionedpv-rk4f: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 01:01:56.601: INFO: At 2021-08-26 00:58:49 +0000 UTC - event for pod-subpath-test-preprovisionedpv-rk4f: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 01:01:56.601: INFO: At 2021-08-26 01:01:56 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-44htv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Killing: Stopping container agnhost-container
Aug 26 01:01:56.757: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 01:01:56.757: INFO: 
Aug 26 01:01:56.914: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
... skipping 140 lines ...
Aug 26 01:02:01.507: INFO: 	Container agnhost-container ready: true, restart count 0
Aug 26 01:02:01.507: INFO: pod-subpath-test-dynamicpv-bpcb started at 2021-08-26 00:59:45 +0000 UTC (2+2 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Init container init-volume-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:02:01.507: INFO: 	Init container test-init-subpath-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:02:01.507: INFO: 	Container test-container-subpath-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:02:01.507: INFO: 	Container test-container-volume-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:02:01.507: INFO: fail-once-non-local-n99cw started at 2021-08-26 00:48:09 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Container c ready: false, restart count 0
Aug 26 01:02:01.507: INFO: csi-hostpath-attacher-0 started at 2021-08-26 00:59:40 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Container csi-attacher ready: true, restart count 0
Aug 26 01:02:01.507: INFO: csi-hostpathplugin-0 started at 2021-08-26 00:59:41 +0000 UTC (0+3 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Container hostpath ready: true, restart count 0
Aug 26 01:02:01.507: INFO: 	Container liveness-probe ready: true, restart count 0
Aug 26 01:02:01.507: INFO: 	Container node-driver-registrar ready: true, restart count 0
Aug 26 01:02:01.507: INFO: csi-hostpath-snapshotter-0 started at 2021-08-26 00:59:42 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Container csi-snapshotter ready: true, restart count 0
Aug 26 01:02:01.507: INFO: hostpath-symlink-prep-provisioning-3543 started at 2021-08-26 00:57:22 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Container init-volume-provisioning-3543 ready: false, restart count 0
Aug 26 01:02:01.507: INFO: fail-once-non-local-mgnnr started at 2021-08-26 00:48:09 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Container c ready: false, restart count 0
Aug 26 01:02:01.507: INFO: csi-hostpath-attacher-0 started at 2021-08-26 00:48:39 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Container csi-attacher ready: true, restart count 0
Aug 26 01:02:01.507: INFO: deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7 started at 2021-08-26 01:01:18 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:01.507: INFO: 	Container write-pod ready: false, restart count 0
Aug 26 01:02:01.507: INFO: csi-hostpath-provisioner-0 started at 2021-08-26 00:59:41 +0000 UTC (0+1 container statuses recorded)
... skipping 43 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202

      Aug 26 01:01:54.759: Unexpected error:
          <*errors.errorString | 0xc001b94640>: {
              s: "expected pod \"pod-subpath-test-preprovisionedpv-rk4f\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-preprovisionedpv-rk4f\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-preprovisionedpv-rk4f" success: Gave up after waiting 5m0s for pod "pod-subpath-test-preprovisionedpv-rk4f" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":22,"skipped":176,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 8 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 26 01:02:01.329: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6ac19339-79d5-4485-bc61-d1691ca2e7f6"
Aug 26 01:02:01.329: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6ac19339-79d5-4485-bc61-d1691ca2e7f6" in namespace "pods-9332" to be "terminated due to deadline exceeded"
Aug 26 01:02:01.488: INFO: Pod "pod-update-activedeadlineseconds-6ac19339-79d5-4485-bc61-d1691ca2e7f6": Phase="Running", Reason="", readiness=true. Elapsed: 159.199021ms
Aug 26 01:02:03.648: INFO: Pod "pod-update-activedeadlineseconds-6ac19339-79d5-4485-bc61-d1691ca2e7f6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.318854087s
Aug 26 01:02:03.648: INFO: Pod "pod-update-activedeadlineseconds-6ac19339-79d5-4485-bc61-d1691ca2e7f6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:02:03.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9332" for this suite.


• [SLOW TEST:7.062 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":113,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:03.986: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 117 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:252
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:274
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":28,"skipped":211,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source"]}

S
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":184,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:01:31.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:37.824 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:75
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":35,"skipped":184,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:09.320: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":31,"skipped":272,"failed":1,"failures":["[k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:10.837: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
• [SLOW TEST:13.318 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:516
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":-1,"completed":44,"skipped":322,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:11.819: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 94 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:02:13.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2447" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":32,"skipped":275,"failed":1,"failures":["[k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:13.684: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 211 lines ...
Aug 26 01:02:11.593: INFO: Waiting for pod aws-client to disappear
Aug 26 01:02:11.753: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Aug 26 01:02:11.753: INFO: Deleting PersistentVolumeClaim "pvc-jqttg"
Aug 26 01:02:11.915: INFO: Deleting PersistentVolume "aws-b62lj"
Aug 26 01:02:12.386: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0683158d7e2cd94dd", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0683158d7e2cd94dd is currently attached to i-096c5d8c993f8b9a9
	status code: 400, request id: 0eb53806-fff0-484d-b271-640204278bac
Aug 26 01:02:18.186: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-0683158d7e2cd94dd".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:02:18.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1399" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":41,"skipped":292,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:18.541: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 70 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
Aug 26 01:02:12.751: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 26 01:02:12.917: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-frhx
STEP: Creating a pod to test subpath
Aug 26 01:02:13.083: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-frhx" in namespace "provisioning-3325" to be "Succeeded or Failed"
Aug 26 01:02:13.248: INFO: Pod "pod-subpath-test-inlinevolume-frhx": Phase="Pending", Reason="", readiness=false. Elapsed: 163.988311ms
Aug 26 01:02:15.412: INFO: Pod "pod-subpath-test-inlinevolume-frhx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.328140832s
Aug 26 01:02:17.576: INFO: Pod "pod-subpath-test-inlinevolume-frhx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.492398702s
STEP: Saw pod success
Aug 26 01:02:17.576: INFO: Pod "pod-subpath-test-inlinevolume-frhx" satisfied condition "Succeeded or Failed"
Aug 26 01:02:17.740: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-subpath-test-inlinevolume-frhx container test-container-subpath-inlinevolume-frhx: <nil>
STEP: delete the pod
Aug 26 01:02:18.073: INFO: Waiting for pod pod-subpath-test-inlinevolume-frhx to disappear
Aug 26 01:02:18.237: INFO: Pod pod-subpath-test-inlinevolume-frhx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-frhx
Aug 26 01:02:18.237: INFO: Deleting pod "pod-subpath-test-inlinevolume-frhx" in namespace "provisioning-3325"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":45,"skipped":343,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:18.919: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":14,"skipped":112,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:02:13.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:5.569 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:48
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":15,"skipped":112,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
Aug 26 01:02:15.588: INFO: PersistentVolumeClaim pvc-bkgjq found but phase is Pending instead of Bound.
Aug 26 01:02:17.745: INFO: PersistentVolumeClaim pvc-bkgjq found and phase=Bound (10.943737986s)
Aug 26 01:02:17.745: INFO: Waiting up to 3m0s for PersistentVolume local-jwt9c to have phase Bound
Aug 26 01:02:17.901: INFO: PersistentVolume local-jwt9c found and phase=Bound (156.588793ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6trv
STEP: Creating a pod to test subpath
Aug 26 01:02:18.372: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6trv" in namespace "provisioning-3203" to be "Succeeded or Failed"
Aug 26 01:02:18.528: INFO: Pod "pod-subpath-test-preprovisionedpv-6trv": Phase="Pending", Reason="", readiness=false. Elapsed: 155.918859ms
Aug 26 01:02:20.684: INFO: Pod "pod-subpath-test-preprovisionedpv-6trv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312636905s
Aug 26 01:02:22.841: INFO: Pod "pod-subpath-test-preprovisionedpv-6trv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.469095239s
STEP: Saw pod success
Aug 26 01:02:22.841: INFO: Pod "pod-subpath-test-preprovisionedpv-6trv" satisfied condition "Succeeded or Failed"
Aug 26 01:02:22.997: INFO: Trying to get logs from node ip-172-20-60-101.ap-northeast-2.compute.internal pod pod-subpath-test-preprovisionedpv-6trv container test-container-subpath-preprovisionedpv-6trv: <nil>
STEP: delete the pod
Aug 26 01:02:23.323: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6trv to disappear
Aug 26 01:02:23.479: INFO: Pod pod-subpath-test-preprovisionedpv-6trv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6trv
Aug 26 01:02:23.479: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6trv" in namespace "provisioning-3203"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":21,"skipped":121,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:25.605: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 119 lines ...
Aug 26 01:02:09.065: INFO: Waiting for pod aws-client to disappear
Aug 26 01:02:09.224: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Aug 26 01:02:09.224: INFO: Deleting PersistentVolumeClaim "pvc-j22lw"
Aug 26 01:02:09.383: INFO: Deleting PersistentVolume "aws-2dzcz"
Aug 26 01:02:09.892: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-054b5fb11c3948d2a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-054b5fb11c3948d2a is currently attached to i-096c5d8c993f8b9a9
	status code: 400, request id: 735fe929-66af-4b17-9370-19ebee2de7af
Aug 26 01:02:15.683: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-054b5fb11c3948d2a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-054b5fb11c3948d2a is currently attached to i-096c5d8c993f8b9a9
	status code: 400, request id: 83629ea0-2b3f-4d35-8c04-d05b47d006fd
Aug 26 01:02:21.444: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-054b5fb11c3948d2a", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-054b5fb11c3948d2a is currently attached to i-096c5d8c993f8b9a9
	status code: 400, request id: b10b0ed3-e3c5-48a6-a1fc-22d5e3df6856
Aug 26 01:02:27.228: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-054b5fb11c3948d2a".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:02:27.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2161" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":24,"skipped":170,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
Aug 26 01:02:27.582: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 47 lines ...
Aug 26 00:57:21.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
Aug 26 00:57:21.814: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Aug 26 00:57:22.184: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3543" in namespace "provisioning-3543" to be "Succeeded or Failed"
Aug 26 00:57:22.403: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 218.923207ms
Aug 26 00:57:24.564: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379949177s
Aug 26 00:57:26.723: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539393762s
Aug 26 00:57:28.882: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 6.698522292s
Aug 26 00:57:31.042: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 8.858067588s
Aug 26 00:57:33.201: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 11.017318476s
... skipping 127 lines ...
Aug 26 01:02:09.858: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.674132081s
Aug 26 01:02:12.020: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.835800823s
Aug 26 01:02:14.181: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.997715574s
Aug 26 01:02:16.343: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.159540178s
Aug 26 01:02:18.505: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.32125444s
Aug 26 01:02:20.666: INFO: Pod "hostpath-symlink-prep-provisioning-3543": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.482731429s
Aug 26 01:02:22.667: FAIL: while waiting for hostPath init pod to succeed
Unexpected error:
    <*errors.errorString | 0xc0034ba7c0>: {
        s: "Gave up after waiting 5m0s for pod \"hostpath-symlink-prep-provisioning-3543\" to be \"Succeeded or Failed\"",
    }
    Gave up after waiting 5m0s for pod "hostpath-symlink-prep-provisioning-3543" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/drivers.(*hostPathSymlinkDriver).CreateVolume(0xc001cf4480, 0xc002e53380, 0x4c04356, 0xc, 0xc001cf4480, 0x4915a01)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:963 +0xb3d
k8s.io/kubernetes/test/e2e/storage/testsuites.CreateVolume(0x5349a20, 0xc001cf4480, 0xc002e53380, 0x4c04356, 0xc, 0x300, 0x6e)
... skipping 14 lines ...
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "provisioning-3543".
STEP: Found 5 events.
Aug 26 01:02:22.830: INFO: At 2021-08-26 00:57:22 +0000 UTC - event for hostpath-symlink-prep-provisioning-3543: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 01:02:22.830: INFO: At 2021-08-26 00:59:12 +0000 UTC - event for hostpath-symlink-prep-provisioning-3543: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 01:02:22.830: INFO: At 2021-08-26 00:59:12 +0000 UTC - event for hostpath-symlink-prep-provisioning-3543: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 01:02:22.830: INFO: At 2021-08-26 00:59:12 +0000 UTC - event for hostpath-symlink-prep-provisioning-3543: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 01:02:22.830: INFO: At 2021-08-26 00:59:12 +0000 UTC - event for hostpath-symlink-prep-provisioning-3543: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 01:02:22.992: INFO: POD                                      NODE                                             PHASE    GRACE  CONDITIONS
Aug 26 01:02:22.992: INFO: hostpath-symlink-prep-provisioning-3543  ip-172-20-62-60.ap-northeast-2.compute.internal  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:57:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:57:22 +0000 UTC ContainersNotReady containers with unready status: [init-volume-provisioning-3543]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:57:22 +0000 UTC ContainersNotReady containers with unready status: [init-volume-provisioning-3543]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:57:22 +0000 UTC  }]
Aug 26 01:02:22.992: INFO: 
Aug 26 01:02:23.155: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 01:02:23.316: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 36091 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 135 lines ...
Aug 26 01:02:27.809: INFO: 	Container init-volume-provisioning-3543 ready: false, restart count 0
Aug 26 01:02:27.809: INFO: pod-subpath-test-dynamicpv-bpcb started at 2021-08-26 00:59:45 +0000 UTC (2+2 container statuses recorded)
Aug 26 01:02:27.809: INFO: 	Init container init-volume-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:02:27.809: INFO: 	Init container test-init-subpath-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:02:27.809: INFO: 	Container test-container-subpath-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:02:27.809: INFO: 	Container test-container-volume-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:02:27.809: INFO: fail-once-non-local-n99cw started at 2021-08-26 00:48:09 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:27.809: INFO: 	Container c ready: false, restart count 0
Aug 26 01:02:27.809: INFO: csi-hostpath-attacher-0 started at 2021-08-26 00:48:39 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:27.809: INFO: 	Container csi-attacher ready: true, restart count 0
Aug 26 01:02:27.809: INFO: deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7 started at 2021-08-26 01:01:18 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:27.809: INFO: 	Container write-pod ready: false, restart count 0
Aug 26 01:02:27.809: INFO: csi-hostpath-provisioner-0 started at 2021-08-26 00:59:41 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:27.809: INFO: 	Container csi-provisioner ready: true, restart count 0
Aug 26 01:02:27.809: INFO: csi-hostpath-snapshotter-0 started at 2021-08-26 01:01:35 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:27.809: INFO: 	Container csi-snapshotter ready: true, restart count 0
Aug 26 01:02:27.809: INFO: fail-once-non-local-mgnnr started at 2021-08-26 00:48:09 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:02:27.809: INFO: 	Container c ready: false, restart count 0
Aug 26 01:02:27.809: INFO: pod-subpath-test-preprovisionedpv-r55w started at 2021-08-26 00:59:50 +0000 UTC (2+2 container statuses recorded)
Aug 26 01:02:27.809: INFO: 	Init container init-volume-preprovisionedpv-r55w ready: false, restart count 0
Aug 26 01:02:27.809: INFO: 	Init container test-init-subpath-preprovisionedpv-r55w ready: false, restart count 0
Aug 26 01:02:27.809: INFO: 	Container test-container-subpath-preprovisionedpv-r55w ready: false, restart count 0
Aug 26 01:02:27.809: INFO: 	Container test-container-volume-preprovisionedpv-r55w ready: false, restart count 0
... skipping 38 lines ...
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202

      Aug 26 01:02:22.667: while waiting for hostPath init pod to succeed
      Unexpected error:
          <*errors.errorString | 0xc0034ba7c0>: {
              s: "Gave up after waiting 5m0s for pod \"hostpath-symlink-prep-provisioning-3543\" to be \"Succeeded or Failed\"",
          }
          Gave up after waiting 5m0s for pod "hostpath-symlink-prep-provisioning-3543" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:963
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":21,"skipped":232,"failed":2,"failures":["[sig-apps] Deployment deployment should support proportional scaling [Conformance]","[sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory"]}
Aug 26 01:02:28.832: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-qxsf
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 01:02:11.790: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qxsf" in namespace "subpath-1108" to be "Succeeded or Failed"
Aug 26 01:02:11.950: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Pending", Reason="", readiness=false. Elapsed: 159.799009ms
Aug 26 01:02:14.109: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 2.318682762s
Aug 26 01:02:16.268: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 4.478018965s
Aug 26 01:02:18.427: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 6.637169659s
Aug 26 01:02:20.586: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 8.796394061s
Aug 26 01:02:22.751: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 10.960741053s
Aug 26 01:02:24.910: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 13.119789113s
Aug 26 01:02:27.069: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 15.278781752s
Aug 26 01:02:29.228: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 17.437840161s
Aug 26 01:02:31.387: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 19.596845323s
Aug 26 01:02:33.550: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Running", Reason="", readiness=true. Elapsed: 21.759581283s
Aug 26 01:02:35.708: INFO: Pod "pod-subpath-test-configmap-qxsf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.918431085s
STEP: Saw pod success
Aug 26 01:02:35.709: INFO: Pod "pod-subpath-test-configmap-qxsf" satisfied condition "Succeeded or Failed"
Aug 26 01:02:35.867: INFO: Trying to get logs from node ip-172-20-61-11.ap-northeast-2.compute.internal pod pod-subpath-test-configmap-qxsf container test-container-subpath-configmap-qxsf: <nil>
STEP: delete the pod
Aug 26 01:02:36.191: INFO: Waiting for pod pod-subpath-test-configmap-qxsf to disappear
Aug 26 01:02:36.350: INFO: Pod pod-subpath-test-configmap-qxsf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qxsf
Aug 26 01:02:36.350: INFO: Deleting pod "pod-subpath-test-configmap-qxsf" in namespace "subpath-1108"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":36,"skipped":192,"failed":0}
Aug 26 01:02:36.837: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":22,"skipped":123,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}
Aug 26 01:02:44.125: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":42,"skipped":303,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted"]}
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 01:02:20.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-mxbj
STEP: Creating a pod to test atomic-volume-subpath
Aug 26 01:02:21.356: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mxbj" in namespace "subpath-4712" to be "Succeeded or Failed"
Aug 26 01:02:21.516: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Pending", Reason="", readiness=false. Elapsed: 159.83371ms
Aug 26 01:02:23.676: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 2.319918788s
Aug 26 01:02:25.837: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 4.480221418s
Aug 26 01:02:27.997: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 6.640259162s
Aug 26 01:02:30.157: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 8.800277977s
Aug 26 01:02:32.317: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 10.960402037s
Aug 26 01:02:34.477: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 13.120544246s
Aug 26 01:02:36.637: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 15.280656607s
Aug 26 01:02:38.798: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 17.441884986s
Aug 26 01:02:40.958: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Running", Reason="", readiness=true. Elapsed: 19.601839737s
Aug 26 01:02:43.118: INFO: Pod "pod-subpath-test-downwardapi-mxbj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.761969452s
STEP: Saw pod success
Aug 26 01:02:43.119: INFO: Pod "pod-subpath-test-downwardapi-mxbj" satisfied condition "Succeeded or Failed"
Aug 26 01:02:43.278: INFO: Trying to get logs from node ip-172-20-62-163.ap-northeast-2.compute.internal pod pod-subpath-test-downwardapi-mxbj container test-container-subpath-downwardapi-mxbj: <nil>
STEP: delete the pod
Aug 26 01:02:43.605: INFO: Waiting for pod pod-subpath-test-downwardapi-mxbj to disappear
Aug 26 01:02:43.764: INFO: Pod pod-subpath-test-downwardapi-mxbj no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mxbj
Aug 26 01:02:43.764: INFO: Deleting pod "pod-subpath-test-downwardapi-mxbj" in namespace "subpath-4712"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":43,"skipped":303,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted"]}
Aug 26 01:02:44.252: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":16,"skipped":113,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","[sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]"]}
Aug 26 01:02:44.965: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:247.647 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":135,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
Aug 26 01:02:49.146: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 66 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should not deadlock when a pod's predecessor fails
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":26,"skipped":228,"failed":1,"failures":["[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]"]}
Aug 26 01:02:51.424: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
Aug 26 01:02:34.578: INFO: PersistentVolumeClaim pvc-rjr72 found and phase=Bound (15.244837682s)
Aug 26 01:02:34.578: INFO: Waiting up to 3m0s for PersistentVolume nfs-7wlpg to have phase Bound
Aug 26 01:02:34.733: INFO: PersistentVolume nfs-7wlpg found and phase=Bound (155.234214ms)
STEP: Checking pod has write access to PersistentVolume
Aug 26 01:02:35.044: INFO: Creating nfs test pod
Aug 26 01:02:35.200: INFO: Pod should terminate with exitcode 0 (success)
Aug 26 01:02:35.200: INFO: Waiting up to 5m0s for pod "pvc-tester-tgbhm" in namespace "pv-2891" to be "Succeeded or Failed"
Aug 26 01:02:35.356: INFO: Pod "pvc-tester-tgbhm": Phase="Pending", Reason="", readiness=false. Elapsed: 155.230904ms
Aug 26 01:02:37.511: INFO: Pod "pvc-tester-tgbhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.310908524s
STEP: Saw pod success
Aug 26 01:02:37.511: INFO: Pod "pvc-tester-tgbhm" satisfied condition "Succeeded or Failed"
Aug 26 01:02:37.511: INFO: Pod pvc-tester-tgbhm succeeded 
Aug 26 01:02:37.511: INFO: Deleting pod "pvc-tester-tgbhm" in namespace "pv-2891"
Aug 26 01:02:37.670: INFO: Wait up to 5m0s for pod "pvc-tester-tgbhm" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Aug 26 01:02:37.838: INFO: Deleting PVC pvc-rjr72 to trigger reclamation of PV 
Aug 26 01:02:37.838: INFO: Deleting PersistentVolumeClaim "pvc-rjr72"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":33,"skipped":293,"failed":1,"failures":["[k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]"]}
Aug 26 01:02:51.436: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:38.464 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":46,"skipped":347,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data"]}
Aug 26 01:02:57.402: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 37 lines ...
Aug 26 01:02:10.778: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-118 to register on node ip-172-20-60-101.ap-northeast-2.compute.internal
STEP: Creating pod
Aug 26 01:02:16.559: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 26 01:02:16.716: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-hg4xr] to have phase Bound
Aug 26 01:02:16.872: INFO: PersistentVolumeClaim pvc-hg4xr found and phase=Bound (156.02894ms)
STEP: checking for CSIInlineVolumes feature
Aug 26 01:02:23.963: INFO: Error getting logs for pod inline-volume-6wgzd: the server rejected our request for an unknown reason (get pods inline-volume-6wgzd)
Aug 26 01:02:23.963: INFO: Deleting pod "inline-volume-6wgzd" in namespace "csi-mock-volumes-118"
Aug 26 01:02:24.120: INFO: Wait up to 5m0s for pod "inline-volume-6wgzd" to be fully deleted
STEP: Deleting the previously created pod
Aug 26 01:02:30.433: INFO: Deleting pod "pvc-volume-tester-j8j9l" in namespace "csi-mock-volumes-118"
Aug 26 01:02:30.591: INFO: Wait up to 5m0s for pod "pvc-volume-tester-j8j9l" to be fully deleted
STEP: Checking CSI driver logs
Aug 26 01:02:33.060: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Aug 26 01:02:33.060: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Aug 26 01:02:33.060: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-j8j9l
Aug 26 01:02:33.060: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-118
Aug 26 01:02:33.060: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: e60e2ad3-89eb-4b67-aa8b-9686690b5039
Aug 26 01:02:33.060: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e60e2ad3-89eb-4b67-aa8b-9686690b5039/volumes/kubernetes.io~csi/pvc-a0beff72-b04c-47a7-a131-7edd6e23ae60/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-j8j9l
Aug 26 01:02:33.060: INFO: Deleting pod "pvc-volume-tester-j8j9l" in namespace "csi-mock-volumes-118"
STEP: Deleting claim pvc-hg4xr
Aug 26 01:02:33.527: INFO: Waiting up to 2m0s for PersistentVolume pvc-a0beff72-b04c-47a7-a131-7edd6e23ae60 to get deleted
Aug 26 01:02:33.682: INFO: PersistentVolume pvc-a0beff72-b04c-47a7-a131-7edd6e23ae60 found and phase=Released (155.179384ms)
Aug 26 01:02:35.838: INFO: PersistentVolume pvc-a0beff72-b04c-47a7-a131-7edd6e23ae60 found and phase=Released (2.310926349s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:308
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:358
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":29,"skipped":212,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source"]}
Aug 26 01:03:05.802: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:48:08.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
Aug 26 01:03:09.663: FAIL: failed to ensure job completion in namespace: job-8453
Unexpected error:
    <*errors.errorString | 0xc000176200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 9 lines ...
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "job-8453".
STEP: Found 12 events.
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:48:09 +0000 UTC - event for fail-once-non-local: {job-controller } SuccessfulCreate: Created pod: fail-once-non-local-n99cw
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:48:09 +0000 UTC - event for fail-once-non-local: {job-controller } SuccessfulCreate: Created pod: fail-once-non-local-mgnnr
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:48:09 +0000 UTC - event for fail-once-non-local-n99cw: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:48:10 +0000 UTC - event for fail-once-non-local-mgnnr: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:50:20 +0000 UTC - event for fail-once-non-local-n99cw: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:50:20 +0000 UTC - event for fail-once-non-local-n99cw: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:50:20 +0000 UTC - event for fail-once-non-local-n99cw: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:50:20 +0000 UTC - event for fail-once-non-local-n99cw: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:50:31 +0000 UTC - event for fail-once-non-local-mgnnr: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:50:31 +0000 UTC - event for fail-once-non-local-mgnnr: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:50:32 +0000 UTC - event for fail-once-non-local-mgnnr: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 01:03:09.822: INFO: At 2021-08-26 00:50:32 +0000 UTC - event for fail-once-non-local-mgnnr: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 01:03:09.982: INFO: POD                        NODE                                             PHASE    GRACE  CONDITIONS
Aug 26 01:03:09.982: INFO: fail-once-non-local-mgnnr  ip-172-20-62-60.ap-northeast-2.compute.internal  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:48:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:48:09 +0000 UTC ContainersNotReady containers with unready status: [c]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:48:09 +0000 UTC ContainersNotReady containers with unready status: [c]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:48:09 +0000 UTC  }]
Aug 26 01:03:09.982: INFO: fail-once-non-local-n99cw  ip-172-20-62-60.ap-northeast-2.compute.internal  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:48:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:48:09 +0000 UTC ContainersNotReady containers with unready status: [c]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:48:09 +0000 UTC ContainersNotReady containers with unready status: [c]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 00:48:09 +0000 UTC  }]
Aug 26 01:03:09.982: INFO: 
Aug 26 01:03:10.141: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 01:03:10.299: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 36091 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 26 01:03:10.300: INFO: 
Logging kubelet events for node ip-172-20-54-134.ap-northeast-2.compute.internal
... skipping 88 lines ...
Aug 26 01:03:14.648: INFO: 	Container agnhost-container ready: true, restart count 0
Aug 26 01:03:14.648: INFO: pod-subpath-test-dynamicpv-bpcb started at 2021-08-26 00:59:45 +0000 UTC (2+2 container statuses recorded)
Aug 26 01:03:14.648: INFO: 	Init container init-volume-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:03:14.648: INFO: 	Init container test-init-subpath-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:03:14.648: INFO: 	Container test-container-subpath-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:03:14.648: INFO: 	Container test-container-volume-dynamicpv-bpcb ready: false, restart count 0
Aug 26 01:03:14.648: INFO: fail-once-non-local-n99cw started at 2021-08-26 00:48:09 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:03:14.648: INFO: 	Container c ready: false, restart count 0
Aug 26 01:03:14.648: INFO: csi-hostpath-attacher-0 started at 2021-08-26 00:59:40 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:03:14.648: INFO: 	Container csi-attacher ready: true, restart count 0
Aug 26 01:03:14.648: INFO: csi-hostpathplugin-0 started at 2021-08-26 00:59:41 +0000 UTC (0+3 container statuses recorded)
Aug 26 01:03:14.648: INFO: 	Container hostpath ready: true, restart count 0
Aug 26 01:03:14.648: INFO: 	Container liveness-probe ready: true, restart count 0
Aug 26 01:03:14.648: INFO: 	Container node-driver-registrar ready: true, restart count 0
Aug 26 01:03:14.648: INFO: csi-hostpath-snapshotter-0 started at 2021-08-26 00:59:42 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:03:14.648: INFO: 	Container csi-snapshotter ready: true, restart count 0
Aug 26 01:03:14.648: INFO: fail-once-non-local-mgnnr started at 2021-08-26 00:48:09 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:03:14.648: INFO: 	Container c ready: false, restart count 0
Aug 26 01:03:14.648: INFO: csi-hostpath-attacher-0 started at 2021-08-26 00:48:39 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:03:14.648: INFO: 	Container csi-attacher ready: true, restart count 0
Aug 26 01:03:14.648: INFO: deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7 started at 2021-08-26 01:01:18 +0000 UTC (0+1 container statuses recorded)
Aug 26 01:03:14.648: INFO: 	Container write-pod ready: false, restart count 0
Aug 26 01:03:14.648: INFO: csi-hostpath-provisioner-0 started at 2021-08-26 00:59:41 +0000 UTC (0+1 container statuses recorded)
... skipping 45 lines ...
STEP: Destroying namespace "job-8453" for this suite.


• Failure [907.384 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117

  Aug 26 01:03:09.663: failed to ensure job completion in namespace: job-8453
  Unexpected error:
      <*errors.errorString | 0xc000176200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:136
------------------------------
{"msg":"FAILED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":7,"skipped":66,"failed":2,"failures":["[sig-apps] Deployment iterative rollouts should eventually progress","[sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted"]}
Aug 26 01:03:15.627: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 274 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should provide basic identity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:124
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":25,"skipped":216,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount"]}
Aug 26 01:04:08.506: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 44 lines ...
Aug 26 00:48:40.983: INFO: creating *v1.StatefulSet: ephemeral-509-2856/csi-hostpath-resizer
Aug 26 00:48:41.147: INFO: creating *v1.Service: ephemeral-509-2856/csi-hostpath-snapshotter
Aug 26 00:48:41.317: INFO: creating *v1.StatefulSet: ephemeral-509-2856/csi-hostpath-snapshotter
Aug 26 00:48:41.481: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-ephemeral-509
Aug 26 00:48:41.644: INFO: Creating resource for CSI ephemeral inline volume
STEP: checking the requested inline volume exists in the pod running on node {Name:ip-172-20-62-60.ap-northeast-2.compute.internal Selector:map[] Affinity:nil}
Aug 26 01:03:42.295: FAIL: waiting for pod with inline volume
Unexpected error:
    <*errors.errorString | 0xc0001f6200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 7 lines ...
k8s.io/kubernetes/test/e2e.TestE2E(0xc000f34180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b
testing.tRunner(0xc000f34180, 0x4dec428)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
Aug 26 01:03:42.457: INFO: Error getting logs for pod inline-volume-tester-qd769: the server rejected our request for an unknown reason (get pods inline-volume-tester-qd769)
Aug 26 01:03:42.458: INFO: Deleting pod "inline-volume-tester-qd769" in namespace "ephemeral-509"
Aug 26 01:03:42.620: INFO: Wait up to 5m0s for pod "inline-volume-tester-qd769" to be fully deleted
STEP: deleting the test namespace: ephemeral-509
STEP: Waiting for namespaces [ephemeral-509] to vanish
STEP: uninstalling csi mock driver
Aug 26 01:03:51.265: INFO: deleting *v1.ServiceAccount: ephemeral-509-2856/csi-attacher
... skipping 325 lines ...
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support two pods which share the same volume [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:179

      Aug 26 01:03:42.295: waiting for pod with inline volume
      Unexpected error:
          <*errors.errorString | 0xc0001f6200>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:307
------------------------------
{"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":2,"skipped":20,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume"]}
Aug 26 01:04:14.704: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:247.861 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:249
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":27,"skipped":222,"failed":2,"failures":["[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","[sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
Aug 26 01:04:45.049: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":38,"skipped":209,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 26 00:59:43.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2 lines ...
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: Gathering metrics
W0826 00:59:45.060114    4864 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Aug 26 01:04:45.375: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:04:45.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8190" for this suite.


• [SLOW TEST:302.580 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":39,"skipped":209,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}
Aug 26 01:04:45.702: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 18 lines ...
Aug 26 00:59:30.123: INFO: PersistentVolumeClaim pvc-nnxcr found but phase is Pending instead of Bound.
Aug 26 00:59:32.278: INFO: PersistentVolumeClaim pvc-nnxcr found but phase is Pending instead of Bound.
Aug 26 00:59:34.434: INFO: PersistentVolumeClaim pvc-nnxcr found and phase=Bound (10.933084937s)
Aug 26 00:59:34.434: INFO: Waiting up to 3m0s for PersistentVolume local-mwfws to have phase Bound
Aug 26 00:59:34.593: INFO: PersistentVolume local-mwfws found and phase=Bound (158.500089ms)
STEP: Creating pod
Aug 26 01:04:35.524: FAIL: Unexpected error:
    <*errors.errorString | 0xc0001ea200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 25 lines ...
Aug 26 01:04:41.664: INFO: At 2021-08-26 00:59:20 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-w72fv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.20" already present on machine
Aug 26 01:04:41.664: INFO: At 2021-08-26 00:59:20 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-w72fv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Created: Created container agnhost-container
Aug 26 01:04:41.664: INFO: At 2021-08-26 00:59:20 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-w72fv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Started: Started container agnhost-container
Aug 26 01:04:41.664: INFO: At 2021-08-26 00:59:23 +0000 UTC - event for pvc-nnxcr: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volumemode-7662" not found
Aug 26 01:04:41.664: INFO: At 2021-08-26 00:59:34 +0000 UTC - event for pod-8e2c273c-0b12-463a-8895-6212dfb4e806: {default-scheduler } Scheduled: Successfully assigned volumemode-7662/pod-8e2c273c-0b12-463a-8895-6212dfb4e806 to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 01:04:41.664: INFO: At 2021-08-26 00:59:35 +0000 UTC - event for pod-8e2c273c-0b12-463a-8895-6212dfb4e806: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 01:04:41.664: INFO: At 2021-08-26 01:00:43 +0000 UTC - event for pod-8e2c273c-0b12-463a-8895-6212dfb4e806: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 01:04:41.664: INFO: At 2021-08-26 01:00:43 +0000 UTC - event for pod-8e2c273c-0b12-463a-8895-6212dfb4e806: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 01:04:41.664: INFO: At 2021-08-26 01:00:44 +0000 UTC - event for pod-8e2c273c-0b12-463a-8895-6212dfb4e806: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 01:04:41.664: INFO: At 2021-08-26 01:00:44 +0000 UTC - event for pod-8e2c273c-0b12-463a-8895-6212dfb4e806: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 01:04:41.664: INFO: At 2021-08-26 01:04:41 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-w72fv: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Killing: Stopping container agnhost-container
Aug 26 01:04:41.819: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 01:04:41.819: INFO: 
Aug 26 01:04:41.976: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
... skipping 136 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should not mount / map unused volumes in a pod [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347

      Aug 26 01:04:35.524: Unexpected error:
          <*errors.errorString | 0xc0001ea200>: {
              s: "timed out waiting for the condition",
          }
          timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:380
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":32,"skipped":228,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}
Aug 26 01:04:47.294: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Aug 26 00:59:47.616: INFO: PersistentVolumeClaim pvc-kc5dz found but phase is Pending instead of Bound.
Aug 26 00:59:49.771: INFO: PersistentVolumeClaim pvc-kc5dz found and phase=Bound (6.619492264s)
Aug 26 00:59:49.771: INFO: Waiting up to 3m0s for PersistentVolume local-7q5cm to have phase Bound
Aug 26 00:59:49.929: INFO: PersistentVolume local-7q5cm found and phase=Bound (158.485752ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-r55w
STEP: Creating a pod to test subpath
Aug 26 00:59:50.395: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-r55w" in namespace "provisioning-5473" to be "Succeeded or Failed"
Aug 26 00:59:50.550: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 154.422333ms
Aug 26 00:59:52.705: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309342621s
Aug 26 00:59:54.860: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464595422s
Aug 26 00:59:57.015: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.619881798s
Aug 26 00:59:59.171: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.775098281s
Aug 26 01:00:01.325: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.929970709s
... skipping 128 lines ...
Aug 26 01:04:39.462: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.066762526s
Aug 26 01:04:41.619: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.223986595s
Aug 26 01:04:43.777: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.381138029s
Aug 26 01:04:45.934: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.538220588s
Aug 26 01:04:48.091: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.695219082s
Aug 26 01:04:50.248: INFO: Pod "pod-subpath-test-preprovisionedpv-r55w": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.852275395s
Aug 26 01:04:52.563: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-r55w" container "init-volume-preprovisionedpv-r55w": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-r55w)
Aug 26 01:04:52.721: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-r55w" container "test-init-subpath-preprovisionedpv-r55w": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-r55w)
Aug 26 01:04:52.878: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-r55w" container "test-container-subpath-preprovisionedpv-r55w": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-r55w)
Aug 26 01:04:53.036: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-preprovisionedpv-r55w" container "test-container-volume-preprovisionedpv-r55w": the server rejected our request for an unknown reason (get pods pod-subpath-test-preprovisionedpv-r55w)
STEP: delete the pod
Aug 26 01:04:53.195: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-r55w to disappear
Aug 26 01:04:53.352: INFO: Pod pod-subpath-test-preprovisionedpv-r55w still exists
Aug 26 01:04:55.352: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-r55w to disappear
Aug 26 01:04:55.510: INFO: Pod pod-subpath-test-preprovisionedpv-r55w no longer exists
Aug 26 01:04:55.510: FAIL: Unexpected error:
    <*errors.errorString | 0xc0030cc910>: {
        s: "expected pod \"pod-subpath-test-preprovisionedpv-r55w\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-preprovisionedpv-r55w\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-preprovisionedpv-r55w" success: Gave up after waiting 5m0s for pod "pod-subpath-test-preprovisionedpv-r55w" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc001b9d1e0, 0x4bf718b, 0x7, 0xc003b24400, 0x1, 0xc000e71130, 0x1, 0x1, 0x4df0110)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 27 lines ...
Aug 26 01:04:57.344: INFO: At 2021-08-26 00:59:38 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-jmvtq: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.20" already present on machine
Aug 26 01:04:57.344: INFO: At 2021-08-26 00:59:38 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-jmvtq: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Created: Created container agnhost-container
Aug 26 01:04:57.344: INFO: At 2021-08-26 00:59:38 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-jmvtq: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Started: Started container agnhost-container
Aug 26 01:04:57.344: INFO: At 2021-08-26 00:59:42 +0000 UTC - event for pvc-kc5dz: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "provisioning-5473" not found
Aug 26 01:04:57.344: INFO: At 2021-08-26 00:59:50 +0000 UTC - event for pod-subpath-test-preprovisionedpv-r55w: {default-scheduler } Scheduled: Successfully assigned provisioning-5473/pod-subpath-test-preprovisionedpv-r55w to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 01:04:57.344: INFO: At 2021-08-26 00:59:51 +0000 UTC - event for pod-subpath-test-preprovisionedpv-r55w: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 01:04:57.344: INFO: At 2021-08-26 01:01:17 +0000 UTC - event for pod-subpath-test-preprovisionedpv-r55w: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 01:04:57.344: INFO: At 2021-08-26 01:01:17 +0000 UTC - event for pod-subpath-test-preprovisionedpv-r55w: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 01:04:57.344: INFO: At 2021-08-26 01:01:18 +0000 UTC - event for pod-subpath-test-preprovisionedpv-r55w: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 01:04:57.344: INFO: At 2021-08-26 01:01:18 +0000 UTC - event for pod-subpath-test-preprovisionedpv-r55w: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 01:04:57.344: INFO: At 2021-08-26 01:04:57 +0000 UTC - event for hostexec-ip-172-20-62-60.ap-northeast-2.compute.internal-jmvtq: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Killing: Stopping container agnhost-container
Aug 26 01:04:57.500: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 01:04:57.500: INFO: 
Aug 26 01:04:57.659: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202

      Aug 26 01:04:55.510: Unexpected error:
          <*errors.errorString | 0xc0030cc910>: {
              s: "expected pod \"pod-subpath-test-preprovisionedpv-r55w\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-preprovisionedpv-r55w\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-preprovisionedpv-r55w" success: Gave up after waiting 5m0s for pod "pod-subpath-test-preprovisionedpv-r55w" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":22,"skipped":168,"failed":3,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}
Aug 26 01:05:02.976: INFO: Running AfterSuite actions on all nodes
STEP: deleting the test namespace: volume-expand-4755
STEP: uninstalling csi mock driver
Aug 26 01:05:02.978: INFO: deleting *v1.ServiceAccount: volume-expand-4755-1193/csi-attacher
Aug 26 01:05:03.135: INFO: deleting *v1.ClusterRole: external-attacher-runner-volume-expand-4755
Aug 26 01:05:03.293: INFO: deleting *v1.ClusterRoleBinding: csi-attacher-role-volume-expand-4755
... skipping 52 lines ...
Aug 26 01:00:02.430: INFO: PersistentVolumeClaim pvc-7bsvs found but phase is Pending instead of Bound.
Aug 26 01:00:04.590: INFO: PersistentVolumeClaim pvc-7bsvs found and phase=Bound (6.637564946s)
Aug 26 01:00:04.590: INFO: Waiting up to 3m0s for PersistentVolume aws-8j2zh to have phase Bound
Aug 26 01:00:04.749: INFO: PersistentVolume aws-8j2zh found and phase=Bound (158.983399ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-hkrd
STEP: Creating a pod to test exec-volume-test
Aug 26 01:00:05.227: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-hkrd" in namespace "volume-9185" to be "Succeeded or Failed"
Aug 26 01:00:05.386: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 158.919779ms
Aug 26 01:00:07.546: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318458908s
Aug 26 01:00:09.705: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477838808s
Aug 26 01:00:11.864: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.63726996s
Aug 26 01:00:14.031: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.803677087s
Aug 26 01:00:16.190: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.963322225s
... skipping 127 lines ...
Aug 26 01:04:52.642: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.415142797s
Aug 26 01:04:54.802: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.57456042s
Aug 26 01:04:56.961: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.733719021s
Aug 26 01:04:59.120: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.893235378s
Aug 26 01:05:01.280: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.0527246s
Aug 26 01:05:03.440: INFO: Pod "exec-volume-test-preprovisionedpv-hkrd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.21344555s
Aug 26 01:05:05.761: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "exec-volume-test-preprovisionedpv-hkrd" container "exec-container-preprovisionedpv-hkrd": the server rejected our request for an unknown reason (get pods exec-volume-test-preprovisionedpv-hkrd)
STEP: delete the pod
Aug 26 01:05:05.921: INFO: Waiting for pod exec-volume-test-preprovisionedpv-hkrd to disappear
Aug 26 01:05:06.081: INFO: Pod exec-volume-test-preprovisionedpv-hkrd still exists
Aug 26 01:05:08.081: INFO: Waiting for pod exec-volume-test-preprovisionedpv-hkrd to disappear
Aug 26 01:05:08.240: INFO: Pod exec-volume-test-preprovisionedpv-hkrd still exists
Aug 26 01:05:10.081: INFO: Waiting for pod exec-volume-test-preprovisionedpv-hkrd to disappear
Aug 26 01:05:10.240: INFO: Pod exec-volume-test-preprovisionedpv-hkrd no longer exists
Aug 26 01:05:10.240: FAIL: Unexpected error:
    <*errors.errorString | 0xc00275b2f0>: {
        s: "expected pod \"exec-volume-test-preprovisionedpv-hkrd\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-preprovisionedpv-hkrd\" to be \"Succeeded or Failed\"",
    }
    expected pod "exec-volume-test-preprovisionedpv-hkrd" success: Gave up after waiting 5m0s for pod "exec-volume-test-preprovisionedpv-hkrd" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00264e160, 0x4c17258, 0x10, 0xc0013db400, 0x0, 0xc001001128, 0x1, 0x1, 0x4df0110)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 10 lines ...
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: Deleting pv and pvc
Aug 26 01:05:10.241: INFO: Deleting PersistentVolumeClaim "pvc-7bsvs"
Aug 26 01:05:10.401: INFO: Deleting PersistentVolume "aws-8j2zh"
Aug 26 01:05:11.384: INFO: Couldn't delete PD "aws://ap-northeast-2a/vol-0d60f03c723382dde", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d60f03c723382dde is currently attached to i-039ba835ddc4f059b
	status code: 400, request id: cff09c44-733f-4a79-a4df-e89ba5531038
Aug 26 01:05:17.189: INFO: Successfully deleted PD "aws://ap-northeast-2a/vol-0d60f03c723382dde".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "volume-9185".
STEP: Found 8 events.
Aug 26 01:05:17.348: INFO: At 2021-08-26 00:59:57 +0000 UTC - event for pvc-7bsvs: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volume-9185" not found
Aug 26 01:05:17.348: INFO: At 2021-08-26 01:00:05 +0000 UTC - event for exec-volume-test-preprovisionedpv-hkrd: {default-scheduler } Scheduled: Successfully assigned volume-9185/exec-volume-test-preprovisionedpv-hkrd to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 01:05:17.348: INFO: At 2021-08-26 01:00:07 +0000 UTC - event for exec-volume-test-preprovisionedpv-hkrd: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "aws-8j2zh" 
Aug 26 01:05:17.348: INFO: At 2021-08-26 01:00:15 +0000 UTC - event for exec-volume-test-preprovisionedpv-hkrd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/nginx:1.14-alpine"
Aug 26 01:05:17.348: INFO: At 2021-08-26 01:01:40 +0000 UTC - event for exec-volume-test-preprovisionedpv-hkrd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/nginx:1.14-alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:1.14-alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 01:05:17.348: INFO: At 2021-08-26 01:01:40 +0000 UTC - event for exec-volume-test-preprovisionedpv-hkrd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 01:05:17.348: INFO: At 2021-08-26 01:01:41 +0000 UTC - event for exec-volume-test-preprovisionedpv-hkrd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/nginx:1.14-alpine"
Aug 26 01:05:17.348: INFO: At 2021-08-26 01:01:41 +0000 UTC - event for exec-volume-test-preprovisionedpv-hkrd: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 01:05:17.507: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Aug 26 01:05:17.507: INFO: 
Aug 26 01:05:17.668: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 01:05:17.827: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 36091 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 01:01:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Aug 26 01:05:17.827: INFO: 
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Aug 26 01:05:10.241: Unexpected error:
          <*errors.errorString | 0xc00275b2f0>: {
              s: "expected pod \"exec-volume-test-preprovisionedpv-hkrd\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-preprovisionedpv-hkrd\" to be \"Succeeded or Failed\"",
          }
          expected pod "exec-volume-test-preprovisionedpv-hkrd" success: Gave up after waiting 5m0s for pod "exec-volume-test-preprovisionedpv-hkrd" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":41,"skipped":252,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume"]}
Aug 26 01:05:23.039: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 50 lines ...
Aug 26 00:59:42.498: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 26 00:59:42.657: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathflhnr] to have phase Bound
Aug 26 00:59:42.815: INFO: PersistentVolumeClaim csi-hostpathflhnr found but phase is Pending instead of Bound.
Aug 26 00:59:44.974: INFO: PersistentVolumeClaim csi-hostpathflhnr found and phase=Bound (2.316426189s)
STEP: Creating pod pod-subpath-test-dynamicpv-bpcb
STEP: Creating a pod to test subpath
Aug 26 00:59:45.456: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-bpcb" in namespace "provisioning-3860" to be "Succeeded or Failed"
Aug 26 00:59:45.614: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 158.0436ms
Aug 26 00:59:47.773: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31658057s
Aug 26 00:59:49.935: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.479051739s
Aug 26 00:59:52.093: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637444351s
Aug 26 00:59:54.252: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796028645s
Aug 26 00:59:56.410: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.954494398s
... skipping 127 lines ...
Aug 26 01:04:32.806: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.349872994s
Aug 26 01:04:34.964: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.508352698s
Aug 26 01:04:37.123: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.666895414s
Aug 26 01:04:39.281: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.825427464s
Aug 26 01:04:41.440: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.98397423s
Aug 26 01:04:43.598: INFO: Pod "pod-subpath-test-dynamicpv-bpcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.142341829s
Aug 26 01:04:45.917: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-dynamicpv-bpcb" container "init-volume-dynamicpv-bpcb": the server rejected our request for an unknown reason (get pods pod-subpath-test-dynamicpv-bpcb)
Aug 26 01:04:46.076: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-dynamicpv-bpcb" container "test-init-subpath-dynamicpv-bpcb": the server rejected our request for an unknown reason (get pods pod-subpath-test-dynamicpv-bpcb)
Aug 26 01:04:46.235: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-dynamicpv-bpcb" container "test-container-subpath-dynamicpv-bpcb": the server rejected our request for an unknown reason (get pods pod-subpath-test-dynamicpv-bpcb)
Aug 26 01:04:46.394: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-dynamicpv-bpcb" container "test-container-volume-dynamicpv-bpcb": the server rejected our request for an unknown reason (get pods pod-subpath-test-dynamicpv-bpcb)
STEP: delete the pod
Aug 26 01:04:46.554: INFO: Waiting for pod pod-subpath-test-dynamicpv-bpcb to disappear
Aug 26 01:04:46.712: INFO: Pod pod-subpath-test-dynamicpv-bpcb still exists
Aug 26 01:04:48.713: INFO: Waiting for pod pod-subpath-test-dynamicpv-bpcb to disappear
Aug 26 01:04:48.871: INFO: Pod pod-subpath-test-dynamicpv-bpcb no longer exists
Aug 26 01:04:48.871: FAIL: Unexpected error:
    <*errors.errorString | 0xc003347350>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-bpcb\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-bpcb\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-dynamicpv-bpcb" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-bpcb" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0011842c0, 0x4bf718b, 0x7, 0xc002058400, 0x1, 0xc002579130, 0x1, 0x1, 0x4df0110)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 283 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202

      Aug 26 01:04:48.871: Unexpected error:
          <*errors.errorString | 0xc003347350>: {
              s: "expected pod \"pod-subpath-test-dynamicpv-bpcb\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-bpcb\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-dynamicpv-bpcb" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-bpcb" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":30,"skipped":226,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}
Aug 26 01:05:24.844: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-storage] Mounted volume expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 157 lines ...
Aug 26 01:06:04.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 01:06:06.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 01:06:08.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 01:06:10.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 01:06:12.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 01:06:13.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 26 01:06:13.117: FAIL: Failed creating deployment deployment "deployment-0b9e65f8-5240-477a-8cbd-6033c808caef" failed to complete: error waiting for deployment "deployment-0b9e65f8-5240-477a-8cbd-6033c808caef" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
Unexpected error:
    <*errors.errorString | 0xc0033bb2b0>: {
        s: "deployment \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef\" failed to complete: error waiting for deployment \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    deployment "deployment-0b9e65f8-5240-477a-8cbd-6033c808caef" failed to complete: error waiting for deployment "deployment-0b9e65f8-5240-477a-8cbd-6033c808caef" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func16.4()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:124 +0x27a
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002ab4180)
... skipping 12 lines ...
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:01:12 +0000 UTC - event for deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895: {replicaset-controller } SuccessfulCreate: Created pod: deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:01:12 +0000 UTC - event for pvc-6p5mz: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:01:17 +0000 UTC - event for pvc-6p5mz: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-38abda10-bf29-4a5d-a0c1-a6477b9ed437 using kubernetes.io/aws-ebs
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:01:18 +0000 UTC - event for deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7: {default-scheduler } Scheduled: Successfully assigned mounted-volume-expand-5815/deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7 to ip-172-20-62-60.ap-northeast-2.compute.internal
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:01:21 +0000 UTC - event for deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-38abda10-bf29-4a5d-a0c1-a6477b9ed437" 
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:01:23 +0000 UTC - event for deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Pulling: Pulling image "docker.io/library/busybox:1.29"
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:02:49 +0000 UTC - event for deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ErrImagePull
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:02:49 +0000 UTC - event for deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} BackOff: Back-off pulling image "docker.io/library/busybox:1.29"
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:02:49 +0000 UTC - event for deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Error: ImagePullBackOff
Aug 26 01:06:13.277: INFO: At 2021-08-26 01:02:49 +0000 UTC - event for deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7: {kubelet ip-172-20-62-60.ap-northeast-2.compute.internal} Failed: Failed to pull image "docker.io/library/busybox:1.29": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.29": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Aug 26 01:06:13.435: INFO: POD                                                              NODE                                             PHASE    GRACE  CONDITIONS
Aug 26 01:06:13.435: INFO: deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895r58q7  ip-172-20-62-60.ap-northeast-2.compute.internal  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 01:01:18 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 01:01:18 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-08-26 01:01:18 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-08-26 01:01:18 +0000 UTC  }]
Aug 26 01:06:13.436: INFO: 
Aug 26 01:06:13.596: INFO: 
Logging node info for node ip-172-20-54-134.ap-northeast-2.compute.internal
Aug 26 01:06:13.754: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-54-134.ap-northeast-2.compute.internal   /api/v1/nodes/ip-172-20-54-134.ap-northeast-2.compute.internal d1b6be42-8818-48dc-b616-baf889602744 39554 0 2021-08-26 00:35:39 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/instancegroup:master-ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-54-134.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-08-26 00:35:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:attachable-volumes-aws-ebs":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {protokube Update v1 2021-08-26 00:35:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} 
{kube-controller-manager Update v1 2021-08-26 00:36:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kops-controller Update v1 2021-08-26 00:36:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{}}}}}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-03339c6d52145655a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50531540992 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3889463296 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-aws-ebs: {{25 0} {<nil>} 25 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45478386818 0} {<nil>} 45478386818 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3784605696 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-08-26 00:35:59 +0000 UTC,LastTransitionTime:2021-08-26 00:35:59 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-08-26 01:06:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-08-26 01:06:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-08-26 01:06:09 +0000 UTC,LastTransitionTime:2021-08-26 00:35:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 01:06:09 +0000 UTC,LastTransitionTime:2021-08-26 00:36:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.54.134,},NodeAddress{Type:ExternalIP,Address:52.78.44.227,},NodeAddress{Type:Hostname,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-54-134.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-78-44-227.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2737d4c327a88bd579dc44db942619,SystemUUID:ec2737d4-c327-a88b-d579-dc44db942619,BootID:1efe01fa-70ee-4408-b748-fedd9db77251,KernelVersion:4.19.0-17-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:containerd://1.4.9,KubeletVersion:v1.19.14,KubeProxyVersion:v1.19.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcdadm/etcd-manager@sha256:17c07a22ebd996b93f6484437c684244219e325abeb70611cbaceb78c0f2d5d4 k8s.gcr.io/etcdadm/etcd-manager:3.0.20210707],SizeBytes:172004323,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.19.14],SizeBytes:120125746,},ContainerImage{Names:[k8s.gcr.io/kops/dns-controller:1.22.0-alpha.2],SizeBytes:114167308,},ContainerImage{Names:[k8s.gcr.io/kops/kops-controller:1.22.0-alpha.2],SizeBytes:113225234,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.19.14],SizeBytes:112093508,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.19.14],SizeBytes:100748383,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.19.14],SizeBytes:47749423,},ContainerImage{Names:[k8s.gcr.io/kops/kube-apiserver-healthcheck:1.22.0-alpha.2],SizeBytes:25622039,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 108 lines ...
• Failure [308.345 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:116

  Aug 26 01:06:13.117: Failed creating deployment deployment "deployment-0b9e65f8-5240-477a-8cbd-6033c808caef" failed to complete: error waiting for deployment "deployment-0b9e65f8-5240-477a-8cbd-6033c808caef" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
  Unexpected error:
      <*errors.errorString | 0xc0033bb2b0>: {
          s: "deployment \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef\" failed to complete: error waiting for deployment \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
      }
      deployment "deployment-0b9e65f8-5240-477a-8cbd-6033c808caef" failed to complete: error waiting for deployment "deployment-0b9e65f8-5240-477a-8cbd-6033c808caef" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63765536472, loc:(*time.Location)(0x7718ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"deployment-0b9e65f8-5240-477a-8cbd-6033c808caef-696b497895\" is progressing."}}, CollisionCount:(*int32)(nil)}
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:124
------------------------------
{"msg":"FAILED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":29,"skipped":267,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","[sig-storage] Mounted volume expand Should verify mounted devices can be resized"]}
Aug 26 01:06:19.246: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0826 01:02:11.926765    4778 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Aug 26 01:07:12.249: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 26 01:07:12.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1438" for this suite.


• [SLOW TEST:308.570 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":18,"skipped":117,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}
Aug 26 01:07:12.582: INFO: Running AfterSuite actions on all nodes


[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 49 lines ...
STEP: creating a claim
Aug 26 01:01:36.359: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Aug 26 01:01:36.526: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathqkfbt] to have phase Bound
Aug 26 01:01:36.687: INFO: PersistentVolumeClaim csi-hostpathqkfbt found and phase=Bound (161.150846ms)
STEP: Creating pod pod-subpath-test-dynamicpv-ttzr
STEP: Creating a pod to test subpath
Aug 26 01:01:37.182: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ttzr" in namespace "provisioning-3254" to be "Succeeded or Failed"
Aug 26 01:01:37.341: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 159.172946ms
Aug 26 01:01:39.500: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318611786s
Aug 26 01:01:41.660: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478206392s
Aug 26 01:01:43.819: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.637831817s
Aug 26 01:01:45.979: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.797660771s
Aug 26 01:01:48.139: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.957193902s
... skipping 127 lines ...
Aug 26 01:06:24.610: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.428336064s
Aug 26 01:06:26.770: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.588466347s
Aug 26 01:06:28.930: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.747933381s
Aug 26 01:06:31.089: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.907507568s
Aug 26 01:06:33.249: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.067230976s
Aug 26 01:06:35.409: INFO: Pod "pod-subpath-test-dynamicpv-ttzr": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.226935036s
Aug 26 01:06:37.774: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-dynamicpv-ttzr" container "init-volume-dynamicpv-ttzr": the server rejected our request for an unknown reason (get pods pod-subpath-test-dynamicpv-ttzr)
Aug 26 01:06:37.934: INFO: Failed to get logs from node "ip-172-20-62-60.ap-northeast-2.compute.internal" pod "pod-subpath-test-dynamicpv-ttzr" container "test-container-subpath-dynamicpv-ttzr": the server rejected our request for an unknown reason (get pods pod-subpath-test-dynamicpv-ttzr)
STEP: delete the pod
Aug 26 01:06:38.095: INFO: Waiting for pod pod-subpath-test-dynamicpv-ttzr to disappear
Aug 26 01:06:38.254: INFO: Pod pod-subpath-test-dynamicpv-ttzr still exists
Aug 26 01:06:40.254: INFO: Waiting for pod pod-subpath-test-dynamicpv-ttzr to disappear
Aug 26 01:06:40.414: INFO: Pod pod-subpath-test-dynamicpv-ttzr still exists
Aug 26 01:06:42.254: INFO: Waiting for pod pod-subpath-test-dynamicpv-ttzr to disappear
Aug 26 01:06:42.414: INFO: Pod pod-subpath-test-dynamicpv-ttzr no longer exists
Aug 26 01:06:42.414: FAIL: Unexpected error:
    <*errors.errorString | 0xc0030ce180>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-ttzr\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-ttzr\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-dynamicpv-ttzr" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-ttzr" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0012b7080, 0x4bf718b, 0x7, 0xc002b3e400, 0x0, 0xc002785120, 0x1, 0x1, 0x4df0110)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725 +0x1ee
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 250 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing single file [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Aug 26 01:06:42.414: Unexpected error:
          <*errors.errorString | 0xc0030ce180>: {
              s: "expected pod \"pod-subpath-test-dynamicpv-ttzr\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-ttzr\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-dynamicpv-ttzr" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-ttzr" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725
------------------------------
{"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":17,"skipped":96,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
Aug 26 01:07:12.741: INFO: Running AfterSuite actions on all nodes


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":21,"skipped":177,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume"]}
Aug 26 01:04:05.513: INFO: Running AfterSuite actions on all nodes
Aug 26 01:07:12.818: INFO: Running AfterSuite actions on node 1
Aug 26 01:07:12.818: INFO: Dumping logs locally to: /logs/artifacts/bdfb00d4-0604-11ec-99f5-c2ede4b31aac
Aug 26 01:07:12.819: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory



Summarizing 38 Failures:

[Fail] [k8s.io] InitContainer [NodeConformance] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:547

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology [It] should provision a volume and schedule a pod with AllowedTopologies 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:180

[Fail] [sig-node] Downward API [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support file as subpath [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes [It] should store data 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:539

[Fail] [k8s.io] [sig-node] Security Context [It] should support container.SecurityContext.RunAsUser [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-apps] Deployment [It] iterative rollouts should eventually progress 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:648

[Fail] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath [It] should support existing directories when readOnly specified in the volumeSource 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:963

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes [It] should store data 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:539

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support readOnly file specified in the volumeMount [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [It] Verify if offline PVC expansion works 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:456

[Fail] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath [It] should be able to unmount after the subpath directory is deleted 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:459

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should be able to unmount after the subpath directory is deleted 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:459

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support readOnly directory specified in the volumeMount 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support file as subpath [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-apps] Deployment [It] deployment should support proportional scaling [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:736

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support existing directory 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support file as subpath [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support existing single file [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode [It] should not mount / map unused volumes in a pod [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:380

[Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand [It] should resize volume when PVC is edited while pod is using it 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:253

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support readOnly file specified in the volumeMount [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode [It] should not mount / map unused volumes in a pod [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:380

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] should implement legacy replacement when the update strategy is OnDelete 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:58

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes [It] should store data 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:539

[Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral [It] should create read/write inline ephemeral volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:307

[Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning [It] should provision storage with pvc data source 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:539

[Fail] [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath [It] should support existing single file [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support existing directory 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath [It] should support existing directory 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:963

[Fail] [sig-apps] Job [It] should run a job to completion when tasks sometimes fail and are not locally restarted 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:136

[Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral [It] should support two pods which share the same volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:307

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode [It] should not mount / map unused volumes in a pod [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:380

[Fail] [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath [It] should support existing directory 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes [It] should allow exec of files on the volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath [It] should support existing directory 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

[Fail] [sig-storage] Mounted volume expand [It] Should verify mounted devices can be resized 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:124

[Fail] [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath [It] should support existing single file [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:725

Ran 688 of 5484 Specs in 1577.787 seconds
FAIL! -- 650 Passed | 38 Failed | 0 Pending | 4796 Skipped


Ginkgo ran 1 suite in 26m27.53272084s
Test Suite Failed
F0826 01:07:12.865013    4201 tester.go:389] failed to run ginkgo tester: exit status 1
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
	/home/prow/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:1026 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x1c0e160, 0x3, {0x0, 0x0}, 0xc0001ee0e0, 0x0, {0x1604ce8, 0xc000670000}, 0x0, 0x0)
	/home/prow/go/pkg/mod/k8s.io/klog/v2@v2.9.0/klog.go:975 +0x63d
k8s.io/klog/v2.(*loggingT).printf(0xc00006e738, 0x46045b, {0x0, 0x0}, {0x0, 0x0}, {0x11bcff8, 0x1f}, {0xc000670000, 0x1, ...})
... skipping 1497 lines ...
route-table:rtb-0fb62fac9508dba0c	ok
vpc:vpc-0e5e90a5a884f8a74	ok
dhcp-options:dopt-012ca518db9e1c48c	ok
Deleted kubectl config for e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io

Deleted cluster: "e2e-cd674f4d3c-26e8c.test-cncf-aws.k8s.io"
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace