Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-06-05 00:49
Elapsed: 29m54s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0605 00:50:04.273372    4046 up.go:43] Cleaning up any leaked resources from previous cluster
I0605 00:50:04.273400    4046 dumplogs.go:38] /logs/artifacts/ce77e131-c597-11eb-a95e-22b332769ebd/kops toolbox dump --name e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I0605 00:50:04.288777    4065 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0605 00:50:04.288873    4065 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-2636771260-f3fa8.test-cncf-aws.k8s.io" not found
W0605 00:50:04.801572    4046 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0605 00:50:04.801619    4046 down.go:48] /logs/artifacts/ce77e131-c597-11eb-a95e-22b332769ebd/kops delete cluster --name e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --yes
I0605 00:50:04.817121    4076 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0605 00:50:04.817361    4076 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-2636771260-f3fa8.test-cncf-aws.k8s.io" not found
I0605 00:50:05.299710    4046 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/06/05 00:50:05 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0605 00:50:05.307376    4046 http.go:37] curl https://ip.jsb.workers.dev
I0605 00:50:05.398882    4046 up.go:144] /logs/artifacts/ce77e131-c597-11eb-a95e-22b332769ebd/kops create cluster --name e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.20.7 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=309956199498/RHEL-8.3_HVM-20210209-x86_64-0-Hourly2-GP2 --channel=alpha --networking=cilium --container-runtime=docker --admin-access 35.225.211.47/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones us-west-1a --master-size c5.large
I0605 00:50:05.414012    4087 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0605 00:50:05.414106    4087 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0605 00:50:05.462860    4087 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0605 00:50:06.046471    4087 new_cluster.go:1022]  Cloud Provider ID = aws
... skipping 41 lines ...

I0605 00:50:30.583658    4046 up.go:181] /logs/artifacts/ce77e131-c597-11eb-a95e-22b332769ebd/kops validate cluster --name e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0605 00:50:30.604122    4108 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0605 00:50:30.604220    4108 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-2636771260-f3fa8.test-cncf-aws.k8s.io

W0605 00:50:31.793747    4108 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:50:41.843821    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:50:51.888733    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:51:01.919523    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:51:11.966830    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:51:22.009575    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:51:32.040149    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:51:42.071923    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:51:52.106225    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:52:02.137607    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:52:12.174173    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:52:22.209496    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:52:32.239173    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:52:42.271918    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:52:52.306487    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:53:02.340264    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:53:12.373325    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:53:22.409745    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:53:32.442279    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:53:42.479526    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:53:52.523448    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:54:02.554852    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:54:12.585753    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:54:22.630687    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:54:32.666921    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:54:42.712310    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:54:52.746111    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:55:02.791071    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:55:12.821804    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:55:22.851063    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:55:32.886381    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:55:42.915644    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:55:52.943917    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:56:02.988048    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0605 00:56:13.025135    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 9 lines ...
Machine	i-05d5f0c0f1f573a5d				machine "i-05d5f0c0f1f573a5d" has not yet joined cluster
Pod	kube-system/cilium-kr4s6			system-node-critical pod "cilium-kr4s6" is not ready (cilium-agent)
Pod	kube-system/cilium-operator-79f9ffb4-vnd5j	system-cluster-critical pod "cilium-operator-79f9ffb4-vnd5j" is not ready (cilium-operator)
Pod	kube-system/coredns-8f5559c9b-mmsj6		system-cluster-critical pod "coredns-8f5559c9b-mmsj6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6gfkx	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6gfkx" is pending

Validation Failed
W0605 00:56:24.970323    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 14 lines ...
Pod	kube-system/cilium-hmvm8			system-node-critical pod "cilium-hmvm8" is pending
Pod	kube-system/cilium-kr4s6			system-node-critical pod "cilium-kr4s6" is not ready (cilium-agent)
Pod	kube-system/cilium-tzlgz			system-node-critical pod "cilium-tzlgz" is pending
Pod	kube-system/coredns-8f5559c9b-mmsj6		system-cluster-critical pod "coredns-8f5559c9b-mmsj6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6gfkx	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6gfkx" is pending

Validation Failed
W0605 00:56:36.238142    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 16 lines ...
Pod	kube-system/cilium-hmvm8			system-node-critical pod "cilium-hmvm8" is not ready (cilium-agent)
Pod	kube-system/cilium-kr4s6			system-node-critical pod "cilium-kr4s6" is not ready (cilium-agent)
Pod	kube-system/cilium-tzlgz			system-node-critical pod "cilium-tzlgz" is not ready (cilium-agent)
Pod	kube-system/coredns-8f5559c9b-mmsj6		system-cluster-critical pod "coredns-8f5559c9b-mmsj6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6gfkx	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6gfkx" is pending

Validation Failed
W0605 00:56:47.496646    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 12 lines ...
Pod	kube-system/cilium-hmvm8			system-node-critical pod "cilium-hmvm8" is not ready (cilium-agent)
Pod	kube-system/cilium-kr4s6			system-node-critical pod "cilium-kr4s6" is not ready (cilium-agent)
Pod	kube-system/cilium-tzlgz			system-node-critical pod "cilium-tzlgz" is not ready (cilium-agent)
Pod	kube-system/coredns-8f5559c9b-mmsj6		system-cluster-critical pod "coredns-8f5559c9b-mmsj6" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-6gfkx	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-6gfkx" is pending

Validation Failed
W0605 00:56:58.728642    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 11 lines ...
Pod	kube-system/cilium-fmbqf		system-node-critical pod "cilium-fmbqf" is not ready (cilium-agent)
Pod	kube-system/cilium-hmvm8		system-node-critical pod "cilium-hmvm8" is not ready (cilium-agent)
Pod	kube-system/cilium-tzlgz		system-node-critical pod "cilium-tzlgz" is not ready (cilium-agent)
Pod	kube-system/coredns-8f5559c9b-7sf75	system-cluster-critical pod "coredns-8f5559c9b-7sf75" is pending
Pod	kube-system/coredns-8f5559c9b-mmsj6	system-cluster-critical pod "coredns-8f5559c9b-mmsj6" is not ready (coredns)

Validation Failed
W0605 00:57:09.889958    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME				MESSAGE
Pod	kube-system/cilium-hmvm8	system-node-critical pod "cilium-hmvm8" is not ready (cilium-agent)
Pod	kube-system/cilium-tzlgz	system-node-critical pod "cilium-tzlgz" is not ready (cilium-agent)

Validation Failed
W0605 00:57:21.057133    4108 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-us-west-1a	Master	c5.large	1	1	us-west-1a
nodes-us-west-1a	Node	t3.medium	4	4	us-west-1a

... skipping 1143 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 219 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 00:59:40.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4595" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 00:59:42.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9725" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 00:59:42.960: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:7.148 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:222
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 00:59:45.254: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 43 lines ...
Jun  5 00:59:38.318: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jun  5 00:59:38.509: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9672294c-0a9d-4e20-b432-b82457c45227" in namespace "security-context-test-9447" to be "Succeeded or Failed"
Jun  5 00:59:38.587: INFO: Pod "alpine-nnp-false-9672294c-0a9d-4e20-b432-b82457c45227": Phase="Pending", Reason="", readiness=false. Elapsed: 78.149085ms
Jun  5 00:59:40.684: INFO: Pod "alpine-nnp-false-9672294c-0a9d-4e20-b432-b82457c45227": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174692545s
Jun  5 00:59:42.736: INFO: Pod "alpine-nnp-false-9672294c-0a9d-4e20-b432-b82457c45227": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226936207s
Jun  5 00:59:44.789: INFO: Pod "alpine-nnp-false-9672294c-0a9d-4e20-b432-b82457c45227": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.279943143s
Jun  5 00:59:44.789: INFO: Pod "alpine-nnp-false-9672294c-0a9d-4e20-b432-b82457c45227" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 00:59:45.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9447" for this suite.


... skipping 23 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1094
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 00:59:45.277: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 44 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jun  5 00:59:38.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6" in namespace "projected-7426" to be "Succeeded or Failed"
Jun  5 00:59:38.306: INFO: Pod "downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6": Phase="Pending", Reason="", readiness=false. Elapsed: 52.216826ms
Jun  5 00:59:40.358: INFO: Pod "downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104840916s
Jun  5 00:59:42.411: INFO: Pod "downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157897323s
Jun  5 00:59:44.463: INFO: Pod "downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210123162s
Jun  5 00:59:46.522: INFO: Pod "downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.268892165s
STEP: Saw pod success
Jun  5 00:59:46.522: INFO: Pod "downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6" satisfied condition "Succeeded or Failed"
Jun  5 00:59:46.575: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6 container client-container: <nil>
STEP: delete the pod
Jun  5 00:59:46.708: INFO: Waiting for pod downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6 to disappear
Jun  5 00:59:46.760: INFO: Pod downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.997 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 00:59:46.902: INFO: Driver local doesn't support ntfs -- skipping
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 00:59:47.344: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 69 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:988
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1033
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":1,"skipped":8,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 00:59:51.432: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 146 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 112 lines ...
Jun  5 00:59:47.872: INFO: PersistentVolumeClaim pvc-skhm9 found but phase is Pending instead of Bound.
Jun  5 00:59:49.924: INFO: PersistentVolumeClaim pvc-skhm9 found and phase=Bound (2.103898245s)
Jun  5 00:59:49.924: INFO: Waiting up to 3m0s for PersistentVolume local-c6j7g to have phase Bound
Jun  5 00:59:49.976: INFO: PersistentVolume local-c6j7g found and phase=Bound (51.664552ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n5wg
STEP: Creating a pod to test subpath
Jun  5 00:59:50.133: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n5wg" in namespace "provisioning-671" to be "Succeeded or Failed"
Jun  5 00:59:50.185: INFO: Pod "pod-subpath-test-preprovisionedpv-n5wg": Phase="Pending", Reason="", readiness=false. Elapsed: 51.614939ms
Jun  5 00:59:52.238: INFO: Pod "pod-subpath-test-preprovisionedpv-n5wg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104963719s
Jun  5 00:59:54.290: INFO: Pod "pod-subpath-test-preprovisionedpv-n5wg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157345478s
STEP: Saw pod success
Jun  5 00:59:54.291: INFO: Pod "pod-subpath-test-preprovisionedpv-n5wg" satisfied condition "Succeeded or Failed"
Jun  5 00:59:54.342: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-n5wg container test-container-volume-preprovisionedpv-n5wg: <nil>
STEP: delete the pod
Jun  5 00:59:54.478: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n5wg to disappear
Jun  5 00:59:54.537: INFO: Pod pod-subpath-test-preprovisionedpv-n5wg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n5wg
Jun  5 00:59:54.537: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n5wg" in namespace "provisioning-671"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 00:59:56.609: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 177 lines ...
• [SLOW TEST:18.979 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 00:59:57.340: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 49 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
Jun  5 00:59:40.616: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun  5 00:59:40.752: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jj6l
STEP: Creating a pod to test subpath
Jun  5 00:59:40.809: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jj6l" in namespace "provisioning-5252" to be "Succeeded or Failed"
Jun  5 00:59:40.860: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 51.128198ms
Jun  5 00:59:42.913: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103437024s
Jun  5 00:59:44.964: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154871516s
Jun  5 00:59:47.016: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206371156s
Jun  5 00:59:49.067: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258186112s
Jun  5 00:59:51.119: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.309854038s
Jun  5 00:59:53.183: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 12.373933559s
Jun  5 00:59:55.235: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 14.42529902s
Jun  5 00:59:57.286: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 16.477169041s
Jun  5 00:59:59.338: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Pending", Reason="", readiness=false. Elapsed: 18.528980479s
Jun  5 01:00:01.390: INFO: Pod "pod-subpath-test-inlinevolume-jj6l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.580844599s
STEP: Saw pod success
Jun  5 01:00:01.390: INFO: Pod "pod-subpath-test-inlinevolume-jj6l" satisfied condition "Succeeded or Failed"
Jun  5 01:00:01.443: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-jj6l container test-container-subpath-inlinevolume-jj6l: <nil>
STEP: delete the pod
Jun  5 01:00:01.558: INFO: Waiting for pod pod-subpath-test-inlinevolume-jj6l to disappear
Jun  5 01:00:01.610: INFO: Pod pod-subpath-test-inlinevolume-jj6l no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jj6l
Jun  5 01:00:01.610: INFO: Deleting pod "pod-subpath-test-inlinevolume-jj6l" in namespace "provisioning-5252"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:01.827: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:02.827: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 122 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146
Jun  5 00:59:41.144: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-127" to be "Succeeded or Failed"
Jun  5 00:59:41.196: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 51.130701ms
Jun  5 00:59:43.247: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102976014s
Jun  5 00:59:45.299: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154493339s
Jun  5 00:59:47.350: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205762082s
Jun  5 00:59:49.402: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257373871s
Jun  5 00:59:51.454: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.309035355s
Jun  5 00:59:53.505: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.360457879s
Jun  5 00:59:55.557: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 14.41241383s
Jun  5 00:59:57.610: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 16.465033023s
Jun  5 00:59:59.662: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 18.517187557s
Jun  5 01:00:01.714: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 20.569041518s
Jun  5 01:00:03.765: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.620942481s
Jun  5 01:00:03.766: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:03.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-127" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:99
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:146
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:03.956: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 108 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:04.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6396" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:04.827: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 43 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:05.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-917" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:05.311: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:05.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1022" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:06.049: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 65 lines ...
Jun  5 00:59:48.157: INFO: PersistentVolumeClaim pvc-6dvl9 found but phase is Pending instead of Bound.
Jun  5 00:59:50.208: INFO: PersistentVolumeClaim pvc-6dvl9 found and phase=Bound (2.102771033s)
Jun  5 00:59:50.208: INFO: Waiting up to 3m0s for PersistentVolume local-fnsvm to have phase Bound
Jun  5 00:59:50.261: INFO: PersistentVolume local-fnsvm found and phase=Bound (52.650942ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pkm6
STEP: Creating a pod to test subpath
Jun  5 00:59:50.419: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pkm6" in namespace "provisioning-1941" to be "Succeeded or Failed"
Jun  5 00:59:50.472: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 52.486869ms
Jun  5 00:59:52.546: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126653272s
Jun  5 00:59:54.598: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178322853s
Jun  5 00:59:56.650: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230297129s
Jun  5 00:59:58.702: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.282396993s
Jun  5 01:00:00.754: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.33459765s
Jun  5 01:00:02.806: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.387010388s
Jun  5 01:00:04.858: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.438597462s
STEP: Saw pod success
Jun  5 01:00:04.858: INFO: Pod "pod-subpath-test-preprovisionedpv-pkm6" satisfied condition "Succeeded or Failed"
Jun  5 01:00:04.909: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-pkm6 container test-container-volume-preprovisionedpv-pkm6: <nil>
STEP: delete the pod
Jun  5 01:00:05.035: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pkm6 to disappear
Jun  5 01:00:05.086: INFO: Pod pod-subpath-test-preprovisionedpv-pkm6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pkm6
Jun  5 01:00:05.086: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pkm6" in namespace "provisioning-1941"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
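Several blocks in this log, including the one above, wait for a PersistentVolumeClaim to leave Pending and reach Bound before creating the test pod. A minimal client-go sketch of that wait follows; the polling interval, timeout, and names are assumptions rather than the framework's exact values.

package pvcwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPVCBound polls a PersistentVolumeClaim until it reports phase Bound,
// mirroring the "found but phase is Pending instead of Bound" lines above.
func WaitForPVCBound(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // stop on API errors; this sketch does not retry them
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}
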
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:06.858: INFO: Only supported for providers [openstack] (not aws)
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:07.042: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 37 lines ...
• [SLOW TEST:30.097 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 49 lines ...
&Pod{ObjectMeta:{webserver-deployment-795d758f88-cq7lv webserver-deployment-795d758f88- deployment-5276  ba42ecc8-5929-4dd3-b6dc-92c65d22d212 3152 0 2021-06-05 01:00:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 41455dbc-09d1-418a-8d8d-7e0d3a304d95 0xc003a94810 0xc003a94811}] []  [{kube-controller-manager Update v1 2021-06-05 01:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41455dbc-09d1-418a-8d8d-7e0d3a304d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-06-05 01:00:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nl6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nl6m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nl6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-56-177.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsG
roup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.56.177,PodIP:,StartTime:2021-06-05 01:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun  5 01:00:10.187: INFO: Pod "webserver-deployment-795d758f88-fnq2r" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-fnq2r webserver-deployment-795d758f88- deployment-5276  673b61dd-a713-4387-a501-d5ab6d53b60d 3304 0 2021-06-05 01:00:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 41455dbc-09d1-418a-8d8d-7e0d3a304d95 0xc003a949a7 0xc003a949a8}] []  [{kube-controller-manager Update v1 2021-06-05 01:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41455dbc-09d1-418a-8d8d-7e0d3a304d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nl6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nl6m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nl6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-35-190.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Priorit
yClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun  5 01:00:10.187: INFO: Pod "webserver-deployment-795d758f88-h4q8b" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-h4q8b webserver-deployment-795d758f88- deployment-5276  40b02dcb-1763-489c-b52b-90ada482d2cf 3284 0 2021-06-05 01:00:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 41455dbc-09d1-418a-8d8d-7e0d3a304d95 0xc003a94ad0 0xc003a94ad1}] []  [{kube-controller-manager Update v1 2021-06-05 01:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41455dbc-09d1-418a-8d8d-7e0d3a304d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-06-05 01:00:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nl6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nl6m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nl6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-63-110.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsG
roup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.63.110,PodIP:,StartTime:2021-06-05 01:00:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun  5 01:00:10.188: INFO: Pod "webserver-deployment-795d758f88-ngsvh" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-ngsvh webserver-deployment-795d758f88- deployment-5276  bacd0dee-0964-4dbc-b571-974e71eed834 3358 0 2021-06-05 01:00:05 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 41455dbc-09d1-418a-8d8d-7e0d3a304d95 0xc003a94c67 0xc003a94c68}] []  [{kube-controller-manager Update v1 2021-06-05 01:00:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41455dbc-09d1-418a-8d8d-7e0d3a304d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-06-05 01:00:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.2.248\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nl6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nl6m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nl6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-198.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxO
ptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.198,PodIP:100.96.2.248,StartTime:2021-06-05 01:00:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.2.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun  5 01:00:10.188: INFO: Pod "webserver-deployment-795d758f88-q68kw" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-q68kw webserver-deployment-795d758f88- deployment-5276  564e9691-c162-45e9-8f6a-9905a25b278c 3315 0 2021-06-05 01:00:08 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 41455dbc-09d1-418a-8d8d-7e0d3a304d95 0xc003a94e30 0xc003a94e31}] []  [{kube-controller-manager Update v1 2021-06-05 01:00:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41455dbc-09d1-418a-8d8d-7e0d3a304d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nl6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nl6m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nl6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-56-177.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},Priorit
yClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun  5 01:00:10.188: INFO: Pod "webserver-deployment-795d758f88-vq7p9" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-vq7p9 webserver-deployment-795d758f88- deployment-5276  15c66e8b-04e8-4dbc-9e34-da8bd83d54a4 3293 0 2021-06-05 01:00:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 41455dbc-09d1-418a-8d8d-7e0d3a304d95 0xc003a94f60 0xc003a94f61}] []  [{kube-controller-manager Update v1 2021-06-05 01:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41455dbc-09d1-418a-8d8d-7e0d3a304d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-06-05 01:00:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nl6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nl6m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nl6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-63-110.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsG
roup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.63.110,PodIP:,StartTime:2021-06-05 01:00:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun  5 01:00:10.188: INFO: Pod "webserver-deployment-795d758f88-xld2m" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-xld2m webserver-deployment-795d758f88- deployment-5276  7aaf8c04-6b31-4441-b619-44c01c3b9671 3289 0 2021-06-05 01:00:07 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 41455dbc-09d1-418a-8d8d-7e0d3a304d95 0xc003a950f7 0xc003a950f8}] []  [{kube-controller-manager Update v1 2021-06-05 01:00:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41455dbc-09d1-418a-8d8d-7e0d3a304d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-06-05 01:00:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6nl6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6nl6m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6nl6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-198.us-west-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsG
roup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-06-05 01:00:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.198,PodIP:,StartTime:2021-06-05 01:00:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 48 lines ...
• [SLOW TEST:31.920 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:10.325: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 108 lines ...
• [SLOW TEST:11.746 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:13.602: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 105 lines ...
• [SLOW TEST:11.735 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:516
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]","total":-1,"completed":4,"skipped":20,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:16.607: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 43 lines ...
Jun  5 01:00:02.832: INFO: PersistentVolumeClaim pvc-xn59d found but phase is Pending instead of Bound.
Jun  5 01:00:04.885: INFO: PersistentVolumeClaim pvc-xn59d found and phase=Bound (10.313481848s)
Jun  5 01:00:04.885: INFO: Waiting up to 3m0s for PersistentVolume local-jzrs7 to have phase Bound
Jun  5 01:00:04.943: INFO: PersistentVolume local-jzrs7 found and phase=Bound (58.709826ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jl9c
STEP: Creating a pod to test subpath
Jun  5 01:00:05.101: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jl9c" in namespace "provisioning-4039" to be "Succeeded or Failed"
Jun  5 01:00:05.169: INFO: Pod "pod-subpath-test-preprovisionedpv-jl9c": Phase="Pending", Reason="", readiness=false. Elapsed: 67.889234ms
Jun  5 01:00:07.223: INFO: Pod "pod-subpath-test-preprovisionedpv-jl9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122146045s
Jun  5 01:00:09.275: INFO: Pod "pod-subpath-test-preprovisionedpv-jl9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174533059s
Jun  5 01:00:11.331: INFO: Pod "pod-subpath-test-preprovisionedpv-jl9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230326961s
Jun  5 01:00:13.384: INFO: Pod "pod-subpath-test-preprovisionedpv-jl9c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.282779253s
Jun  5 01:00:15.445: INFO: Pod "pod-subpath-test-preprovisionedpv-jl9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.344360894s
STEP: Saw pod success
Jun  5 01:00:15.445: INFO: Pod "pod-subpath-test-preprovisionedpv-jl9c" satisfied condition "Succeeded or Failed"
Jun  5 01:00:15.499: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-jl9c container test-container-subpath-preprovisionedpv-jl9c: <nil>
STEP: delete the pod
Jun  5 01:00:15.668: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jl9c to disappear
Jun  5 01:00:15.721: INFO: Pod pod-subpath-test-preprovisionedpv-jl9c no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jl9c
Jun  5 01:00:15.721: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jl9c" in namespace "provisioning-4039"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:16.843: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:18.152: INFO: Only supported for providers [azure] (not aws)
... skipping 92 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:18.214: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 75 lines ...
• [SLOW TEST:15.806 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 104 lines ...
• [SLOW TEST:45.167 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:129
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":1,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:23.246: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
Jun  5 01:00:10.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
Jun  5 01:00:10.664: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun  5 01:00:10.769: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4696" in namespace "provisioning-4696" to be "Succeeded or Failed"
Jun  5 01:00:10.819: INFO: Pod "hostpath-symlink-prep-provisioning-4696": Phase="Pending", Reason="", readiness=false. Elapsed: 50.269885ms
Jun  5 01:00:12.870: INFO: Pod "hostpath-symlink-prep-provisioning-4696": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.100896279s
STEP: Saw pod success
Jun  5 01:00:12.870: INFO: Pod "hostpath-symlink-prep-provisioning-4696" satisfied condition "Succeeded or Failed"
Jun  5 01:00:12.870: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4696" in namespace "provisioning-4696"
Jun  5 01:00:12.926: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4696" to be fully deleted
Jun  5 01:00:12.976: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-sxcg
STEP: Creating a pod to test subpath
Jun  5 01:00:13.029: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-sxcg" in namespace "provisioning-4696" to be "Succeeded or Failed"
Jun  5 01:00:13.079: INFO: Pod "pod-subpath-test-inlinevolume-sxcg": Phase="Pending", Reason="", readiness=false. Elapsed: 50.156224ms
Jun  5 01:00:15.130: INFO: Pod "pod-subpath-test-inlinevolume-sxcg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101130985s
Jun  5 01:00:17.180: INFO: Pod "pod-subpath-test-inlinevolume-sxcg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151591998s
Jun  5 01:00:19.231: INFO: Pod "pod-subpath-test-inlinevolume-sxcg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202234675s
STEP: Saw pod success
Jun  5 01:00:19.231: INFO: Pod "pod-subpath-test-inlinevolume-sxcg" satisfied condition "Succeeded or Failed"
Jun  5 01:00:19.281: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-sxcg container test-container-subpath-inlinevolume-sxcg: <nil>
STEP: delete the pod
Jun  5 01:00:19.395: INFO: Waiting for pod pod-subpath-test-inlinevolume-sxcg to disappear
Jun  5 01:00:19.444: INFO: Pod pod-subpath-test-inlinevolume-sxcg no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-sxcg
Jun  5 01:00:19.444: INFO: Deleting pod "pod-subpath-test-inlinevolume-sxcg" in namespace "provisioning-4696"
STEP: Deleting pod
Jun  5 01:00:19.494: INFO: Deleting pod "pod-subpath-test-inlinevolume-sxcg" in namespace "provisioning-4696"
Jun  5 01:00:19.599: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4696" in namespace "provisioning-4696" to be "Succeeded or Failed"
Jun  5 01:00:19.649: INFO: Pod "hostpath-symlink-prep-provisioning-4696": Phase="Pending", Reason="", readiness=false. Elapsed: 49.869387ms
Jun  5 01:00:21.699: INFO: Pod "hostpath-symlink-prep-provisioning-4696": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100495106s
Jun  5 01:00:23.750: INFO: Pod "hostpath-symlink-prep-provisioning-4696": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151307021s
STEP: Saw pod success
Jun  5 01:00:23.750: INFO: Pod "hostpath-symlink-prep-provisioning-4696" satisfied condition "Succeeded or Failed"
Jun  5 01:00:23.750: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4696" in namespace "provisioning-4696"
Jun  5 01:00:23.815: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4696" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:23.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4696" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 62 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:27.151: INFO: Only supported for providers [azure] (not aws)
... skipping 89 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-21d4a125-d0e7-4a9e-8780-c6a2839e5444
STEP: Creating a pod to test consume configMaps
Jun  5 01:00:18.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720" in namespace "configmap-3057" to be "Succeeded or Failed"
Jun  5 01:00:18.651: INFO: Pod "pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720": Phase="Pending", Reason="", readiness=false. Elapsed: 53.054545ms
Jun  5 01:00:20.703: INFO: Pod "pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105626814s
Jun  5 01:00:22.756: INFO: Pod "pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157771679s
Jun  5 01:00:24.808: INFO: Pod "pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210272806s
Jun  5 01:00:26.860: INFO: Pod "pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262449955s
Jun  5 01:00:28.913: INFO: Pod "pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.315218026s
STEP: Saw pod success
Jun  5 01:00:28.913: INFO: Pod "pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720" satisfied condition "Succeeded or Failed"
Jun  5 01:00:28.965: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720 container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:00:29.174: INFO: Waiting for pod pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720 to disappear
Jun  5 01:00:29.239: INFO: Pod pod-configmaps-af052f83-a9a7-4d5b-acf4-69ff29b2d720 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.121 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:59
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 34 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-a0b9ff9d-f178-4392-b241-7dba5104221c
STEP: Creating a pod to test consume configMaps
Jun  5 01:00:23.086: INFO: Waiting up to 5m0s for pod "pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4" in namespace "configmap-6356" to be "Succeeded or Failed"
Jun  5 01:00:23.137: INFO: Pod "pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4": Phase="Pending", Reason="", readiness=false. Elapsed: 51.342281ms
Jun  5 01:00:25.189: INFO: Pod "pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102816875s
Jun  5 01:00:27.241: INFO: Pod "pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154520213s
Jun  5 01:00:29.295: INFO: Pod "pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208622206s
Jun  5 01:00:31.347: INFO: Pod "pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.260465684s
STEP: Saw pod success
Jun  5 01:00:31.347: INFO: Pod "pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4" satisfied condition "Succeeded or Failed"
Jun  5 01:00:31.398: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4 container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:00:31.522: INFO: Waiting for pod pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4 to disappear
Jun  5 01:00:31.578: INFO: Pod pod-configmaps-3480f1eb-014d-4c5d-b361-76931a6995e4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.968 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:31.693: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 22 lines ...
Jun  5 00:59:46.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
Jun  5 00:59:47.184: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun  5 00:59:47.296: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3002" in namespace "provisioning-3002" to be "Succeeded or Failed"
Jun  5 00:59:47.348: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 52.174086ms
Jun  5 00:59:49.401: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105077528s
Jun  5 00:59:51.454: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157395557s
Jun  5 00:59:53.506: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209602164s
Jun  5 00:59:55.558: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262028737s
Jun  5 00:59:57.617: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 10.320239025s
Jun  5 00:59:59.669: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 12.372930023s
Jun  5 01:00:01.722: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 14.42565861s
Jun  5 01:00:03.774: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 16.478014343s
Jun  5 01:00:05.827: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.5304891s
STEP: Saw pod success
Jun  5 01:00:05.827: INFO: Pod "hostpath-symlink-prep-provisioning-3002" satisfied condition "Succeeded or Failed"
Jun  5 01:00:05.827: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3002" in namespace "provisioning-3002"
Jun  5 01:00:05.887: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3002" to be fully deleted
Jun  5 01:00:05.941: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-n9x2
STEP: Creating a pod to test atomic-volume-subpath
Jun  5 01:00:05.995: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-n9x2" in namespace "provisioning-3002" to be "Succeeded or Failed"
Jun  5 01:00:06.047: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Pending", Reason="", readiness=false. Elapsed: 52.135632ms
Jun  5 01:00:08.100: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104918022s
Jun  5 01:00:10.152: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157407505s
Jun  5 01:00:12.205: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Running", Reason="", readiness=true. Elapsed: 6.209965793s
Jun  5 01:00:14.257: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Running", Reason="", readiness=true. Elapsed: 8.262580333s
Jun  5 01:00:16.310: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Running", Reason="", readiness=true. Elapsed: 10.314999944s
... skipping 2 lines ...
Jun  5 01:00:22.467: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Running", Reason="", readiness=true. Elapsed: 16.472736115s
Jun  5 01:00:24.524: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Running", Reason="", readiness=true. Elapsed: 18.529223198s
Jun  5 01:00:26.578: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Running", Reason="", readiness=true. Elapsed: 20.582929789s
Jun  5 01:00:28.631: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Running", Reason="", readiness=true. Elapsed: 22.636751609s
Jun  5 01:00:30.684: INFO: Pod "pod-subpath-test-inlinevolume-n9x2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.689498018s
STEP: Saw pod success
Jun  5 01:00:30.684: INFO: Pod "pod-subpath-test-inlinevolume-n9x2" satisfied condition "Succeeded or Failed"
Jun  5 01:00:30.736: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-n9x2 container test-container-subpath-inlinevolume-n9x2: <nil>
STEP: delete the pod
Jun  5 01:00:30.854: INFO: Waiting for pod pod-subpath-test-inlinevolume-n9x2 to disappear
Jun  5 01:00:30.906: INFO: Pod pod-subpath-test-inlinevolume-n9x2 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-n9x2
Jun  5 01:00:30.906: INFO: Deleting pod "pod-subpath-test-inlinevolume-n9x2" in namespace "provisioning-3002"
STEP: Deleting pod
Jun  5 01:00:30.958: INFO: Deleting pod "pod-subpath-test-inlinevolume-n9x2" in namespace "provisioning-3002"
Jun  5 01:00:31.067: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3002" in namespace "provisioning-3002" to be "Succeeded or Failed"
Jun  5 01:00:31.119: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Pending", Reason="", readiness=false. Elapsed: 52.536777ms
Jun  5 01:00:33.172: INFO: Pod "hostpath-symlink-prep-provisioning-3002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.105237741s
STEP: Saw pod success
Jun  5 01:00:33.172: INFO: Pod "hostpath-symlink-prep-provisioning-3002" satisfied condition "Succeeded or Failed"
Jun  5 01:00:33.172: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3002" in namespace "provisioning-3002"
Jun  5 01:00:33.232: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3002" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:33.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3002" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:34.858: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 17 lines ...
STEP: Creating a kubernetes client
Jun  5 01:00:29.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Jun  5 01:00:30.015: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:35.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-523" for this suite.


• [SLOW TEST:5.679 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:57.891 seconds]
[k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:36.292: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 101 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:36.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5499" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:37.100: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 113 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:37.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":3,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:37.329: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 40 lines ...
• [SLOW TEST:13.465 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:89
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:37.462: INFO: Only supported for providers [openstack] (not aws)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "nfs" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:97
------------------------------
... skipping 5 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun  5 01:00:35.172: INFO: Waiting up to 5m0s for pod "pod-883f5eb0-352a-4423-8032-2a30b08907b3" in namespace "emptydir-4343" to be "Succeeded or Failed"
Jun  5 01:00:35.222: INFO: Pod "pod-883f5eb0-352a-4423-8032-2a30b08907b3": Phase="Pending", Reason="", readiness=false. Elapsed: 49.956062ms
Jun  5 01:00:37.273: INFO: Pod "pod-883f5eb0-352a-4423-8032-2a30b08907b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100365438s
Jun  5 01:00:39.342: INFO: Pod "pod-883f5eb0-352a-4423-8032-2a30b08907b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16985185s
STEP: Saw pod success
Jun  5 01:00:39.342: INFO: Pod "pod-883f5eb0-352a-4423-8032-2a30b08907b3" satisfied condition "Succeeded or Failed"
Jun  5 01:00:39.427: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-883f5eb0-352a-4423-8032-2a30b08907b3 container test-container: <nil>
STEP: delete the pod
Jun  5 01:00:39.626: INFO: Waiting for pod pod-883f5eb0-352a-4423-8032-2a30b08907b3 to disappear
Jun  5 01:00:39.676: INFO: Pod pod-883f5eb0-352a-4423-8032-2a30b08907b3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:39.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4343" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":3,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:40.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3818" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:40.489: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:382
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":4,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:44.001: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 198 lines ...
Jun  5 01:00:36.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  5 01:00:38.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jun  5 01:00:40.641: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451634, loc:(*time.Location)(0x7977f00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun  5 01:00:43.699: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:44.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9922" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101


• [SLOW TEST:11.067 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:44.495: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 49 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
Jun  5 01:00:41.123: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun  5 01:00:41.175: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9fm8
STEP: Creating a pod to test subpath
Jun  5 01:00:41.228: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9fm8" in namespace "provisioning-1965" to be "Succeeded or Failed"
Jun  5 01:00:41.278: INFO: Pod "pod-subpath-test-inlinevolume-9fm8": Phase="Pending", Reason="", readiness=false. Elapsed: 49.984709ms
Jun  5 01:00:43.328: INFO: Pod "pod-subpath-test-inlinevolume-9fm8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100336912s
Jun  5 01:00:45.380: INFO: Pod "pod-subpath-test-inlinevolume-9fm8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151386414s
STEP: Saw pod success
Jun  5 01:00:45.380: INFO: Pod "pod-subpath-test-inlinevolume-9fm8" satisfied condition "Succeeded or Failed"
Jun  5 01:00:45.430: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-9fm8 container test-container-volume-inlinevolume-9fm8: <nil>
STEP: delete the pod
Jun  5 01:00:45.556: INFO: Waiting for pod pod-subpath-test-inlinevolume-9fm8 to disappear
Jun  5 01:00:45.608: INFO: Pod pod-subpath-test-inlinevolume-9fm8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9fm8
Jun  5 01:00:45.608: INFO: Deleting pod "pod-subpath-test-inlinevolume-9fm8" in namespace "provisioning-1965"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:45.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1965" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:45.828: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
Jun  5 01:00:02.040: INFO: PersistentVolumeClaim pvc-wqprf found but phase is Pending instead of Bound.
Jun  5 01:00:04.092: INFO: PersistentVolumeClaim pvc-wqprf found and phase=Bound (6.216593644s)
Jun  5 01:00:04.093: INFO: Waiting up to 3m0s for PersistentVolume aws-7njjz to have phase Bound
Jun  5 01:00:04.148: INFO: PersistentVolume aws-7njjz found and phase=Bound (55.937468ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-f4s5
STEP: Creating a pod to test exec-volume-test
Jun  5 01:00:04.310: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-f4s5" in namespace "volume-6841" to be "Succeeded or Failed"
Jun  5 01:00:04.362: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 51.251972ms
Jun  5 01:00:06.414: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103790216s
Jun  5 01:00:08.466: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155550109s
Jun  5 01:00:10.517: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207086931s
Jun  5 01:00:12.569: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.259054734s
Jun  5 01:00:14.621: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.310905586s
... skipping 5 lines ...
Jun  5 01:00:26.937: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.626295465s
Jun  5 01:00:28.997: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.686177175s
Jun  5 01:00:31.049: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.738233194s
Jun  5 01:00:33.101: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.79029138s
Jun  5 01:00:35.154: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.843099647s
STEP: Saw pod success
Jun  5 01:00:35.154: INFO: Pod "exec-volume-test-preprovisionedpv-f4s5" satisfied condition "Succeeded or Failed"
Jun  5 01:00:35.205: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-f4s5 container exec-container-preprovisionedpv-f4s5: <nil>
STEP: delete the pod
Jun  5 01:00:35.328: INFO: Waiting for pod exec-volume-test-preprovisionedpv-f4s5 to disappear
Jun  5 01:00:35.379: INFO: Pod exec-volume-test-preprovisionedpv-f4s5 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-f4s5
Jun  5 01:00:35.379: INFO: Deleting pod "exec-volume-test-preprovisionedpv-f4s5" in namespace "volume-6841"
STEP: Deleting pv and pvc
Jun  5 01:00:35.431: INFO: Deleting PersistentVolumeClaim "pvc-wqprf"
Jun  5 01:00:35.483: INFO: Deleting PersistentVolume "aws-7njjz"
Jun  5 01:00:35.707: INFO: Couldn't delete PD "aws://us-west-1a/vol-06e97af65443fe8e8", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-06e97af65443fe8e8 is currently attached to i-05d5f0c0f1f573a5d
	status code: 400, request id: 763e0b94-37f2-4f6f-8640-83c90ebd9c0b
Jun  5 01:00:41.061: INFO: Couldn't delete PD "aws://us-west-1a/vol-06e97af65443fe8e8", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-06e97af65443fe8e8 is currently attached to i-05d5f0c0f1f573a5d
	status code: 400, request id: f0c3bf31-14e9-422a-96b3-28f48c3ea4ed
Jun  5 01:00:46.427: INFO: Successfully deleted PD "aws://us-west-1a/vol-06e97af65443fe8e8".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:46.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6841" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:46.563: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 113 lines ...
Jun  5 01:00:18.424: INFO: PersistentVolumeClaim pvc-2wk5j found but phase is Pending instead of Bound.
Jun  5 01:00:20.474: INFO: PersistentVolumeClaim pvc-2wk5j found and phase=Bound (16.458437237s)
Jun  5 01:00:20.474: INFO: Waiting up to 3m0s for PersistentVolume local-ctfj9 to have phase Bound
Jun  5 01:00:20.525: INFO: PersistentVolume local-ctfj9 found and phase=Bound (50.858757ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nm4n
STEP: Creating a pod to test subpath
Jun  5 01:00:20.678: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nm4n" in namespace "provisioning-7515" to be "Succeeded or Failed"
Jun  5 01:00:20.728: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 50.274609ms
Jun  5 01:00:22.780: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101768515s
Jun  5 01:00:24.830: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152407245s
Jun  5 01:00:26.881: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202973234s
Jun  5 01:00:28.933: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255366947s
Jun  5 01:00:30.984: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306211439s
Jun  5 01:00:33.036: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 12.357954247s
Jun  5 01:00:35.087: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.409542626s
STEP: Saw pod success
Jun  5 01:00:35.087: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n" satisfied condition "Succeeded or Failed"
Jun  5 01:00:35.138: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-nm4n container test-container-subpath-preprovisionedpv-nm4n: <nil>
STEP: delete the pod
Jun  5 01:00:35.254: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nm4n to disappear
Jun  5 01:00:35.305: INFO: Pod pod-subpath-test-preprovisionedpv-nm4n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nm4n
Jun  5 01:00:35.305: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nm4n" in namespace "provisioning-7515"
STEP: Creating pod pod-subpath-test-preprovisionedpv-nm4n
STEP: Creating a pod to test subpath
Jun  5 01:00:35.407: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nm4n" in namespace "provisioning-7515" to be "Succeeded or Failed"
Jun  5 01:00:35.457: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 50.050698ms
Jun  5 01:00:37.508: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100797339s
Jun  5 01:00:39.577: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170233102s
Jun  5 01:00:41.628: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221060007s
Jun  5 01:00:43.679: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.271786881s
Jun  5 01:00:45.730: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.322416037s
STEP: Saw pod success
Jun  5 01:00:45.730: INFO: Pod "pod-subpath-test-preprovisionedpv-nm4n" satisfied condition "Succeeded or Failed"
Jun  5 01:00:45.780: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-nm4n container test-container-subpath-preprovisionedpv-nm4n: <nil>
STEP: delete the pod
Jun  5 01:00:45.909: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nm4n to disappear
Jun  5 01:00:45.961: INFO: Pod pod-subpath-test-preprovisionedpv-nm4n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nm4n
Jun  5 01:00:45.961: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nm4n" in namespace "provisioning-7515"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:47.266: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 63 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-7c174e4a-1964-4e89-9a6c-30b78fddce80
STEP: Creating a pod to test consume configMaps
Jun  5 01:00:44.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac9c17a4-9729-4a89-b6a0-c3e279cf4463" in namespace "projected-6173" to be "Succeeded or Failed"
Jun  5 01:00:44.981: INFO: Pod "pod-projected-configmaps-ac9c17a4-9729-4a89-b6a0-c3e279cf4463": Phase="Pending", Reason="", readiness=false. Elapsed: 52.151381ms
Jun  5 01:00:47.034: INFO: Pod "pod-projected-configmaps-ac9c17a4-9729-4a89-b6a0-c3e279cf4463": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104565999s
STEP: Saw pod success
Jun  5 01:00:47.034: INFO: Pod "pod-projected-configmaps-ac9c17a4-9729-4a89-b6a0-c3e279cf4463" satisfied condition "Succeeded or Failed"
Jun  5 01:00:47.086: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-projected-configmaps-ac9c17a4-9729-4a89-b6a0-c3e279cf4463 container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:00:47.205: INFO: Waiting for pod pod-projected-configmaps-ac9c17a4-9729-4a89-b6a0-c3e279cf4463 to disappear
Jun  5 01:00:47.259: INFO: Pod pod-projected-configmaps-ac9c17a4-9729-4a89-b6a0-c3e279cf4463 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:47.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6173" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:47.375: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 139 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:47.483: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 76 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:988
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1007
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":4,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:50.712: INFO: Driver local doesn't support ntfs -- skipping
... skipping 225 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-62a90357-6b14-4be6-bb01-a97d54878941
STEP: Creating a pod to test consume secrets
Jun  5 01:00:46.970: INFO: Waiting up to 5m0s for pod "pod-secrets-9e9406e2-0e03-4d82-bac7-9412abad6849" in namespace "secrets-8890" to be "Succeeded or Failed"
Jun  5 01:00:47.022: INFO: Pod "pod-secrets-9e9406e2-0e03-4d82-bac7-9412abad6849": Phase="Pending", Reason="", readiness=false. Elapsed: 51.480693ms
Jun  5 01:00:49.078: INFO: Pod "pod-secrets-9e9406e2-0e03-4d82-bac7-9412abad6849": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107627547s
Jun  5 01:00:51.130: INFO: Pod "pod-secrets-9e9406e2-0e03-4d82-bac7-9412abad6849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159854805s
STEP: Saw pod success
Jun  5 01:00:51.130: INFO: Pod "pod-secrets-9e9406e2-0e03-4d82-bac7-9412abad6849" satisfied condition "Succeeded or Failed"
Jun  5 01:00:51.182: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-secrets-9e9406e2-0e03-4d82-bac7-9412abad6849 container secret-volume-test: <nil>
STEP: delete the pod
Jun  5 01:00:51.321: INFO: Waiting for pod pod-secrets-9e9406e2-0e03-4d82-bac7-9412abad6849 to disappear
Jun  5 01:00:51.377: INFO: Pod pod-secrets-9e9406e2-0e03-4d82-bac7-9412abad6849 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:51.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8890" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 11 lines ...
Jun  5 00:59:38.419: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun  5 00:59:38.419: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun  5 00:59:38.419: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-6420-aws-scqc4fm      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-6420    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-6420-aws-scqc4fm,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-6420    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-6420-aws-scqc4fm,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-6420-aws-scqc4fm
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Jun  5 00:59:38.729: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-ssp5l" in namespace "provisioning-6420" to be "Succeeded or Failed"
Jun  5 00:59:38.804: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 75.459423ms
Jun  5 00:59:40.855: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126591941s
Jun  5 00:59:42.906: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177548959s
Jun  5 00:59:44.958: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228741074s
Jun  5 00:59:47.009: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280033893s
Jun  5 00:59:49.061: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.331627771s
... skipping 7 lines ...
Jun  5 01:00:05.483: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 26.75432194s
Jun  5 01:00:07.537: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 28.808303005s
Jun  5 01:00:09.588: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 30.859493595s
Jun  5 01:00:11.640: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Pending", Reason="", readiness=false. Elapsed: 32.910656856s
Jun  5 01:00:13.691: INFO: Pod "pvc-volume-tester-writer-ssp5l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.961762969s
STEP: Saw pod success
Jun  5 01:00:13.691: INFO: Pod "pvc-volume-tester-writer-ssp5l" satisfied condition "Succeeded or Failed"
Jun  5 01:00:13.796: INFO: Pod pvc-volume-tester-writer-ssp5l has the following logs: 
Jun  5 01:00:13.796: INFO: Deleting pod "pvc-volume-tester-writer-ssp5l" in namespace "provisioning-6420"
Jun  5 01:00:13.855: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-ssp5l" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-63-110.us-west-1.compute.internal"
Jun  5 01:00:14.065: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-nrhkl" in namespace "provisioning-6420" to be "Succeeded or Failed"
Jun  5 01:00:14.115: INFO: Pod "pvc-volume-tester-reader-nrhkl": Phase="Pending", Reason="", readiness=false. Elapsed: 50.704853ms
Jun  5 01:00:16.172: INFO: Pod "pvc-volume-tester-reader-nrhkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10740027s
Jun  5 01:00:18.224: INFO: Pod "pvc-volume-tester-reader-nrhkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15873951s
Jun  5 01:00:20.275: INFO: Pod "pvc-volume-tester-reader-nrhkl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209965696s
Jun  5 01:00:22.326: INFO: Pod "pvc-volume-tester-reader-nrhkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.261228995s
STEP: Saw pod success
Jun  5 01:00:22.326: INFO: Pod "pvc-volume-tester-reader-nrhkl" satisfied condition "Succeeded or Failed"
Jun  5 01:00:22.385: INFO: Pod pvc-volume-tester-reader-nrhkl has the following logs: hello world

Jun  5 01:00:22.385: INFO: Deleting pod "pvc-volume-tester-reader-nrhkl" in namespace "provisioning-6420"
Jun  5 01:00:22.441: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-nrhkl" to be fully deleted
Jun  5 01:00:22.492: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-gs2r5] to have phase Bound
Jun  5 01:00:22.543: INFO: PersistentVolumeClaim pvc-gs2r5 found and phase=Bound (50.777238ms)
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":1,"skipped":8,"failed":0}

SSS
------------------------------
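
The provisioning test above creates a WaitForFirstConsumer StorageClass backed by kubernetes.io/aws-ebs before claiming a volume. A hedged sketch of building a comparable StorageClass with client-go; the object name and the "debug" mount option are placeholders, since the options the suite actually applied are not visible in this excerpt:

// Illustrative sketch only: a StorageClass comparable to the
// provisioning-6420-aws-sc... object dumped above.
package main

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	mode := storagev1.VolumeBindingWaitForFirstConsumer
	sc := &storagev1.StorageClass{
		ObjectMeta:        metav1.ObjectMeta{Name: "example-aws-sc"}, // hypothetical name
		Provisioner:       "kubernetes.io/aws-ebs",
		MountOptions:      []string{"debug"}, // placeholder mount option
		VolumeBindingMode: &mode,             // bind only once a pod is scheduled
	}
	if _, err := cs.StorageV1().StorageClasses().Create(context.TODO(), sc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
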
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:53.342: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 114 lines ...
Jun  5 01:00:47.821: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-3789.svc.cluster.local:80/\n+ test 28 -ne 0\n"
Jun  5 01:00:47.821: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Jun  5 01:00:47.926: INFO: Running '/tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3789 exec execpod-rwk4d -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-3789.svc.cluster.local:80/'
Jun  5 01:00:51.333: INFO: rc: 28
Jun  5 01:00:51.333: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3789 exec execpod-rwk4d -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-3789.svc.cluster.local:80/:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-3789.svc.cluster.local:80/
command terminated with exit code 28

error:
exit status 28
Jun  5 01:00:53.334: INFO: Running '/tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3789 exec execpod-rwk4d -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-3789.svc.cluster.local:80/'
Jun  5 01:00:54.050: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-3789.svc.cluster.local:80/\n"
Jun  5 01:00:54.050: INFO: stdout: "NOW: 2021-06-05 01:00:53.98214797 +0000 UTC m=+25.101455542"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-3789
... skipping 9 lines ...
• [SLOW TEST:27.291 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1974
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:00:54.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:54.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7444" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":4,"skipped":16,"failed":0}

SSS
------------------------------
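
The EmptyDir test above exercises a memory-backed volume with a specified size. A minimal sketch of such a pod spec built from the core/v1 API types; the pod name, image, and 64Mi limit are placeholders, not the values the suite used:

// Illustrative sketch only: a pod with a tmpfs-backed emptyDir capped at 64Mi.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func memoryBackedPod() *corev1.Pod {
	sizeLimit := resource.MustParse("64Mi") // placeholder size
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mem-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "df -h /mnt/tmpfs"},
				VolumeMounts: []corev1.VolumeMount{{Name: "tmpfs-vol", MountPath: "/mnt/tmpfs"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "tmpfs-vol",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium:    corev1.StorageMediumMemory, // back the volume with RAM (tmpfs)
						SizeLimit: &sizeLimit,
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(memoryBackedPod().Name) }
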
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:54.872: INFO: Only supported for providers [gce gke] (not aws)
... skipping 37 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:265
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":6,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:00:46.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Jun  5 01:00:53.693: INFO: Waiting up to 5m0s for pod "metadata-volume-e059465a-66de-4faa-9a76-1efd6d433e58" in namespace "downward-api-9815" to be "Succeeded or Failed"
Jun  5 01:00:53.743: INFO: Pod "metadata-volume-e059465a-66de-4faa-9a76-1efd6d433e58": Phase="Pending", Reason="", readiness=false. Elapsed: 50.5026ms
Jun  5 01:00:55.796: INFO: Pod "metadata-volume-e059465a-66de-4faa-9a76-1efd6d433e58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103295524s
STEP: Saw pod success
Jun  5 01:00:55.796: INFO: Pod "metadata-volume-e059465a-66de-4faa-9a76-1efd6d433e58" satisfied condition "Succeeded or Failed"
Jun  5 01:00:55.850: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod metadata-volume-e059465a-66de-4faa-9a76-1efd6d433e58 container client-container: <nil>
STEP: delete the pod
Jun  5 01:00:55.964: INFO: Waiting for pod metadata-volume-e059465a-66de-4faa-9a76-1efd6d433e58 to disappear
Jun  5 01:00:56.023: INFO: Pod metadata-volume-e059465a-66de-4faa-9a76-1efd6d433e58 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:00:56.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9815" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:56.138: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 203 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":5,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:00:56.750: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 51 lines ...
Jun  5 01:00:05.067: INFO: PersistentVolume nfs-mnmc8 found and phase=Bound (51.875097ms)
Jun  5 01:00:05.126: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-qzdmx] to have phase Bound
Jun  5 01:00:05.185: INFO: PersistentVolumeClaim pvc-qzdmx found and phase=Bound (59.290817ms)
STEP: Checking pod has write access to PersistentVolumes
Jun  5 01:00:05.237: INFO: Creating nfs test pod
Jun  5 01:00:05.291: INFO: Pod should terminate with exitcode 0 (success)
Jun  5 01:00:05.291: INFO: Waiting up to 5m0s for pod "pvc-tester-cx527" in namespace "pv-8466" to be "Succeeded or Failed"
Jun  5 01:00:05.343: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 52.001509ms
Jun  5 01:00:07.395: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104341366s
Jun  5 01:00:09.448: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156790422s
Jun  5 01:00:11.503: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212232731s
Jun  5 01:00:13.556: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 8.264774545s
Jun  5 01:00:15.612: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 10.320931322s
... skipping 4 lines ...
Jun  5 01:00:25.876: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 20.584685272s
Jun  5 01:00:27.928: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 22.637601582s
Jun  5 01:00:29.981: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 24.689868139s
Jun  5 01:00:32.035: INFO: Pod "pvc-tester-cx527": Phase="Pending", Reason="", readiness=false. Elapsed: 26.743798039s
Jun  5 01:00:34.087: INFO: Pod "pvc-tester-cx527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.796302556s
STEP: Saw pod success
Jun  5 01:00:34.087: INFO: Pod "pvc-tester-cx527" satisfied condition "Succeeded or Failed"
Jun  5 01:00:34.087: INFO: Pod pvc-tester-cx527 succeeded 
Jun  5 01:00:34.087: INFO: Deleting pod "pvc-tester-cx527" in namespace "pv-8466"
Jun  5 01:00:34.144: INFO: Wait up to 5m0s for pod "pvc-tester-cx527" to be fully deleted
Jun  5 01:00:34.248: INFO: Creating nfs test pod
Jun  5 01:00:34.305: INFO: Pod should terminate with exitcode 0 (success)
Jun  5 01:00:34.305: INFO: Waiting up to 5m0s for pod "pvc-tester-4qd5b" in namespace "pv-8466" to be "Succeeded or Failed"
Jun  5 01:00:34.360: INFO: Pod "pvc-tester-4qd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.739149ms
Jun  5 01:00:36.413: INFO: Pod "pvc-tester-4qd5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107526587s
Jun  5 01:00:38.555: INFO: Pod "pvc-tester-4qd5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.249921823s
STEP: Saw pod success
Jun  5 01:00:38.555: INFO: Pod "pvc-tester-4qd5b" satisfied condition "Succeeded or Failed"
Jun  5 01:00:38.555: INFO: Pod pvc-tester-4qd5b succeeded 
Jun  5 01:00:38.555: INFO: Deleting pod "pvc-tester-4qd5b" in namespace "pv-8466"
Jun  5 01:00:38.646: INFO: Wait up to 5m0s for pod "pvc-tester-4qd5b" to be fully deleted
Jun  5 01:00:38.752: INFO: Creating nfs test pod
Jun  5 01:00:38.811: INFO: Pod should terminate with exitcode 0 (success)
Jun  5 01:00:38.811: INFO: Waiting up to 5m0s for pod "pvc-tester-c8jxz" in namespace "pv-8466" to be "Succeeded or Failed"
Jun  5 01:00:38.864: INFO: Pod "pvc-tester-c8jxz": Phase="Pending", Reason="", readiness=false. Elapsed: 52.544066ms
Jun  5 01:00:40.916: INFO: Pod "pvc-tester-c8jxz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105255385s
Jun  5 01:00:42.969: INFO: Pod "pvc-tester-c8jxz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157510635s
STEP: Saw pod success
Jun  5 01:00:42.969: INFO: Pod "pvc-tester-c8jxz" satisfied condition "Succeeded or Failed"
Jun  5 01:00:42.969: INFO: Pod pvc-tester-c8jxz succeeded 
Jun  5 01:00:42.969: INFO: Deleting pod "pvc-tester-c8jxz" in namespace "pv-8466"
Jun  5 01:00:43.026: INFO: Wait up to 5m0s for pod "pvc-tester-c8jxz" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Jun  5 01:00:43.182: INFO: Deleting PVC pvc-xfx6c to trigger reclamation of PV nfs-gmv9g
Jun  5 01:00:43.183: INFO: Deleting PersistentVolumeClaim "pvc-xfx6c"
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 3 PVs and 3 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:243
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
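
The NFS persistent-volume test above pre-creates PVs and PVCs and waits for them to bind. A sketch of one statically bound PV/PVC pair using the API types of this release line (client-go v0.20, where the claim's Resources field is a ResourceRequirements); the NFS server address, export path, and object names are placeholders:

// Illustrative sketch only: a statically bound NFS PV/PVC pair comparable to
// the nfs-* volumes and pvc-* claims created by the test above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nfsPVAndPVC() (*corev1.PersistentVolume, *corev1.PersistentVolumeClaim) {
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "nfs-demo-pv"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// Placeholder NFS export; the test runs its own NFS server pod.
				NFS: &corev1.NFSVolumeSource{Server: "10.0.0.10", Path: "/exports"},
			},
		},
	}
	emptyClass := ""
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "nfs-demo-pvc", Namespace: "pv-8466"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("1Gi"),
				},
			},
			StorageClassName: &emptyClass,  // skip default dynamic provisioning
			VolumeName:       "nfs-demo-pv", // bind statically to the PV above
		},
	}
	return pv, pvc
}

func main() {
	pv, pvc := nfsPVAndPVC()
	fmt.Println(pv.Name, pvc.Name)
}
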
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 60 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1094
------------------------------
... skipping 128 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":7,"skipped":21,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:00:54.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
• [SLOW TEST:5.978 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":8,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:00.963: INFO: Only supported for providers [gce gke] (not aws)
... skipping 134 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1094
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":4,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:01.016: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:00:56.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-a95c31b3-53d4-4573-ad94-5d5d49a25f50
STEP: Creating a pod to test consume configMaps
Jun  5 01:00:56.493: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6" in namespace "projected-7465" to be "Succeeded or Failed"
Jun  5 01:00:56.547: INFO: Pod "pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6": Phase="Pending", Reason="", readiness=false. Elapsed: 53.301403ms
Jun  5 01:00:58.597: INFO: Pod "pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103807449s
Jun  5 01:01:00.652: INFO: Pod "pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158775852s
Jun  5 01:01:02.703: INFO: Pod "pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209117966s
STEP: Saw pod success
Jun  5 01:01:02.703: INFO: Pod "pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6" satisfied condition "Succeeded or Failed"
Jun  5 01:01:02.753: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6 container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:01:02.873: INFO: Waiting for pod pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6 to disappear
Jun  5 01:01:02.923: INFO: Pod pod-projected-configmaps-1194fe22-5a67-4d09-9188-b06fee90afb6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.903 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:03.060: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 20 lines ...
• [SLOW TEST:5.503 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:04.547: INFO: Only supported for providers [gce gke] (not aws)
... skipping 66 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-aca09e96-7b58-4722-9bed-fa5faa19067b
STEP: Creating a pod to test consume configMaps
Jun  5 01:01:01.569: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66" in namespace "projected-9272" to be "Succeeded or Failed"
Jun  5 01:01:01.622: INFO: Pod "pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66": Phase="Pending", Reason="", readiness=false. Elapsed: 52.993808ms
Jun  5 01:01:03.704: INFO: Pod "pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134943025s
Jun  5 01:01:05.765: INFO: Pod "pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196151191s
Jun  5 01:01:07.816: INFO: Pod "pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.247638224s
STEP: Saw pod success
Jun  5 01:01:07.816: INFO: Pod "pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66" satisfied condition "Succeeded or Failed"
Jun  5 01:01:07.871: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66 container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:01:08.001: INFO: Waiting for pod pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66 to disappear
Jun  5 01:01:08.075: INFO: Pod pod-projected-configmaps-1047ec94-474b-4e19-918d-48672945dd66 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.163 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:08.207: INFO: Only supported for providers [gce gke] (not aws)
... skipping 491 lines ...
• [SLOW TEST:92.663 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":36,"failed":0}
[BeforeEach] [sig-windows] Windows volume mounts 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jun  5 01:01:10.896: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] Windows volume mounts 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 62 lines ...
Jun  5 01:01:02.930: INFO: PersistentVolumeClaim pvc-fkv26 found but phase is Pending instead of Bound.
Jun  5 01:01:04.992: INFO: PersistentVolumeClaim pvc-fkv26 found and phase=Bound (4.171641829s)
Jun  5 01:01:04.992: INFO: Waiting up to 3m0s for PersistentVolume local-8snbm to have phase Bound
Jun  5 01:01:05.048: INFO: PersistentVolume local-8snbm found and phase=Bound (55.448713ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kkvz
STEP: Creating a pod to test subpath
Jun  5 01:01:05.228: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kkvz" in namespace "provisioning-4770" to be "Succeeded or Failed"
Jun  5 01:01:05.281: INFO: Pod "pod-subpath-test-preprovisionedpv-kkvz": Phase="Pending", Reason="", readiness=false. Elapsed: 52.211353ms
Jun  5 01:01:07.331: INFO: Pod "pod-subpath-test-preprovisionedpv-kkvz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102759233s
Jun  5 01:01:09.384: INFO: Pod "pod-subpath-test-preprovisionedpv-kkvz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155757565s
Jun  5 01:01:11.435: INFO: Pod "pod-subpath-test-preprovisionedpv-kkvz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206965252s
STEP: Saw pod success
Jun  5 01:01:11.435: INFO: Pod "pod-subpath-test-preprovisionedpv-kkvz" satisfied condition "Succeeded or Failed"
Jun  5 01:01:11.485: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-kkvz container test-container-subpath-preprovisionedpv-kkvz: <nil>
STEP: delete the pod
Jun  5 01:01:11.596: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kkvz to disappear
Jun  5 01:01:11.647: INFO: Pod pod-subpath-test-preprovisionedpv-kkvz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kkvz
Jun  5 01:01:11.647: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kkvz" in namespace "provisioning-4770"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":68,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 46 lines ...
Jun  5 01:01:02.926: INFO: PersistentVolumeClaim pvc-jrfx8 found but phase is Pending instead of Bound.
Jun  5 01:01:04.984: INFO: PersistentVolumeClaim pvc-jrfx8 found and phase=Bound (14.430698883s)
Jun  5 01:01:04.985: INFO: Waiting up to 3m0s for PersistentVolume local-clrqj to have phase Bound
Jun  5 01:01:05.037: INFO: PersistentVolume local-clrqj found and phase=Bound (52.146556ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-t6wh
STEP: Creating a pod to test subpath
Jun  5 01:01:05.228: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-t6wh" in namespace "provisioning-2068" to be "Succeeded or Failed"
Jun  5 01:01:05.284: INFO: Pod "pod-subpath-test-preprovisionedpv-t6wh": Phase="Pending", Reason="", readiness=false. Elapsed: 55.723392ms
Jun  5 01:01:07.336: INFO: Pod "pod-subpath-test-preprovisionedpv-t6wh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107369782s
Jun  5 01:01:09.387: INFO: Pod "pod-subpath-test-preprovisionedpv-t6wh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158821886s
Jun  5 01:01:11.440: INFO: Pod "pod-subpath-test-preprovisionedpv-t6wh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211475913s
Jun  5 01:01:13.492: INFO: Pod "pod-subpath-test-preprovisionedpv-t6wh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.26337833s
STEP: Saw pod success
Jun  5 01:01:13.492: INFO: Pod "pod-subpath-test-preprovisionedpv-t6wh" satisfied condition "Succeeded or Failed"
Jun  5 01:01:13.543: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-t6wh container test-container-subpath-preprovisionedpv-t6wh: <nil>
STEP: delete the pod
Jun  5 01:01:13.661: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-t6wh to disappear
Jun  5 01:01:13.719: INFO: Pod pod-subpath-test-preprovisionedpv-t6wh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-t6wh
Jun  5 01:01:13.719: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-t6wh" in namespace "provisioning-2068"
... skipping 74 lines ...
Jun  5 01:00:23.930: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-9vgs5] to have phase Bound
Jun  5 01:00:23.981: INFO: PersistentVolumeClaim pvc-9vgs5 found and phase=Bound (50.944975ms)
STEP: Deleting the previously created pod
Jun  5 01:00:44.243: INFO: Deleting pod "pvc-volume-tester-5srp5" in namespace "csi-mock-volumes-214"
Jun  5 01:00:44.297: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5srp5" to be fully deleted
STEP: Checking CSI driver logs
Jun  5 01:00:56.455: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/102b2159-9591-44c2-a99b-07c92c0a9596/volumes/kubernetes.io~csi/pvc-f9e1853b-6c49-46fc-a3df-729f2eaae0ca/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-5srp5
Jun  5 01:00:56.455: INFO: Deleting pod "pvc-volume-tester-5srp5" in namespace "csi-mock-volumes-214"
STEP: Deleting claim pvc-9vgs5
Jun  5 01:00:56.611: INFO: Waiting up to 2m0s for PersistentVolume pvc-f9e1853b-6c49-46fc-a3df-729f2eaae0ca to get deleted
Jun  5 01:00:56.668: INFO: PersistentVolume pvc-f9e1853b-6c49-46fc-a3df-729f2eaae0ca found and phase=Released (56.488908ms)
Jun  5 01:00:58.719: INFO: PersistentVolume pvc-f9e1853b-6c49-46fc-a3df-729f2eaae0ca was removed
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:437
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:487
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:14.827: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 33 lines ...
Jun  5 01:01:03.487: INFO: PersistentVolumeClaim pvc-znxrm found but phase is Pending instead of Bound.
Jun  5 01:01:05.538: INFO: PersistentVolumeClaim pvc-znxrm found and phase=Bound (2.11256028s)
Jun  5 01:01:05.538: INFO: Waiting up to 3m0s for PersistentVolume local-vc56s to have phase Bound
Jun  5 01:01:05.589: INFO: PersistentVolume local-vc56s found and phase=Bound (50.654011ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4cxh
STEP: Creating a pod to test subpath
Jun  5 01:01:05.749: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4cxh" in namespace "provisioning-7533" to be "Succeeded or Failed"
Jun  5 01:01:05.801: INFO: Pod "pod-subpath-test-preprovisionedpv-4cxh": Phase="Pending", Reason="", readiness=false. Elapsed: 51.061488ms
Jun  5 01:01:07.852: INFO: Pod "pod-subpath-test-preprovisionedpv-4cxh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102416974s
Jun  5 01:01:09.903: INFO: Pod "pod-subpath-test-preprovisionedpv-4cxh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153665981s
Jun  5 01:01:11.958: INFO: Pod "pod-subpath-test-preprovisionedpv-4cxh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208799438s
Jun  5 01:01:14.010: INFO: Pod "pod-subpath-test-preprovisionedpv-4cxh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.260298612s
STEP: Saw pod success
Jun  5 01:01:14.010: INFO: Pod "pod-subpath-test-preprovisionedpv-4cxh" satisfied condition "Succeeded or Failed"
Jun  5 01:01:14.061: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-4cxh container test-container-volume-preprovisionedpv-4cxh: <nil>
STEP: delete the pod
Jun  5 01:01:14.175: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4cxh to disappear
Jun  5 01:01:14.226: INFO: Pod pod-subpath-test-preprovisionedpv-4cxh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4cxh
Jun  5 01:01:14.226: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4cxh" in namespace "provisioning-7533"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:15.043: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 92 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 27 lines ...
Jun  5 01:01:03.607: INFO: PersistentVolumeClaim pvc-bzf9r found but phase is Pending instead of Bound.
Jun  5 01:01:05.662: INFO: PersistentVolumeClaim pvc-bzf9r found and phase=Bound (16.555353214s)
Jun  5 01:01:05.662: INFO: Waiting up to 3m0s for PersistentVolume local-h45lk to have phase Bound
Jun  5 01:01:05.716: INFO: PersistentVolume local-h45lk found and phase=Bound (53.49087ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tdwt
STEP: Creating a pod to test subpath
Jun  5 01:01:05.879: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tdwt" in namespace "provisioning-708" to be "Succeeded or Failed"
Jun  5 01:01:05.931: INFO: Pod "pod-subpath-test-preprovisionedpv-tdwt": Phase="Pending", Reason="", readiness=false. Elapsed: 51.50922ms
Jun  5 01:01:07.987: INFO: Pod "pod-subpath-test-preprovisionedpv-tdwt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108167172s
Jun  5 01:01:10.039: INFO: Pod "pod-subpath-test-preprovisionedpv-tdwt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160348283s
Jun  5 01:01:12.092: INFO: Pod "pod-subpath-test-preprovisionedpv-tdwt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212492009s
Jun  5 01:01:14.144: INFO: Pod "pod-subpath-test-preprovisionedpv-tdwt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.26459118s
STEP: Saw pod success
Jun  5 01:01:14.144: INFO: Pod "pod-subpath-test-preprovisionedpv-tdwt" satisfied condition "Succeeded or Failed"
Jun  5 01:01:14.195: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-tdwt container test-container-volume-preprovisionedpv-tdwt: <nil>
STEP: delete the pod
Jun  5 01:01:14.307: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tdwt to disappear
Jun  5 01:01:14.358: INFO: Pod pod-subpath-test-preprovisionedpv-tdwt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tdwt
Jun  5 01:01:14.358: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tdwt" in namespace "provisioning-708"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":30,"failed":0}

SS
------------------------------
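The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from the e2e framework's pod-phase polling. A minimal client-go sketch of the same idea, assuming a working clientset; the helper name, interval, and parameters are illustrative, not the framework's own:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSucceededOrFailed polls the pod roughly every 2s (the log above
// shows similar ~2s gaps) until it reaches a terminal phase or the timeout
// expires. This is a sketch of the pattern, not the e2e framework's helper.
func waitForPodSucceededOrFailed(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending/Running, keep polling
		}
	})
}
```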
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:35
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:01:14.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:183
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:01:16.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8398" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:17.004: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 41 lines ...
Jun  5 01:00:47.956: INFO: PersistentVolumeClaim pvc-k2zmg found but phase is Pending instead of Bound.
Jun  5 01:00:50.019: INFO: PersistentVolumeClaim pvc-k2zmg found and phase=Bound (8.273541762s)
Jun  5 01:00:50.019: INFO: Waiting up to 3m0s for PersistentVolume local-7s8pp to have phase Bound
Jun  5 01:00:50.098: INFO: PersistentVolume local-7s8pp found and phase=Bound (78.780949ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-hhnc
STEP: Creating a pod to test atomic-volume-subpath
Jun  5 01:00:50.316: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-hhnc" in namespace "provisioning-9378" to be "Succeeded or Failed"
Jun  5 01:00:50.369: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.642704ms
Jun  5 01:00:52.424: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1078139s
Jun  5 01:00:54.479: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16316448s
Jun  5 01:00:56.533: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Running", Reason="", readiness=true. Elapsed: 6.217035216s
Jun  5 01:00:58.586: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Running", Reason="", readiness=true. Elapsed: 8.269411585s
Jun  5 01:01:00.645: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Running", Reason="", readiness=true. Elapsed: 10.328489978s
... skipping 2 lines ...
Jun  5 01:01:06.808: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Running", Reason="", readiness=true. Elapsed: 16.491824883s
Jun  5 01:01:08.861: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Running", Reason="", readiness=true. Elapsed: 18.544538746s
Jun  5 01:01:10.913: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Running", Reason="", readiness=true. Elapsed: 20.597279056s
Jun  5 01:01:12.966: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Running", Reason="", readiness=true. Elapsed: 22.649695853s
Jun  5 01:01:15.019: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.702311978s
STEP: Saw pod success
Jun  5 01:01:15.019: INFO: Pod "pod-subpath-test-preprovisionedpv-hhnc" satisfied condition "Succeeded or Failed"
Jun  5 01:01:15.071: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-hhnc container test-container-subpath-preprovisionedpv-hhnc: <nil>
STEP: delete the pod
Jun  5 01:01:15.185: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-hhnc to disappear
Jun  5 01:01:15.237: INFO: Pod pod-subpath-test-preprovisionedpv-hhnc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-hhnc
Jun  5 01:01:15.238: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-hhnc" in namespace "provisioning-9378"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":26,"failed":0}

S
------------------------------
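Before running the subpath pod, the log above waits for the PersistentVolumeClaim and then the PersistentVolume to reach phase Bound ("found but phase is Pending instead of Bound"). A minimal client-go sketch of that wait; the helper names and polling interval are made up for illustration:

```go
package sketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVCBound polls until the claim reports phase Bound, mirroring the
// PVC wait lines in the log above.
func waitForPVCBound(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
}

// waitForPVBound does the same for the cluster-scoped PersistentVolume.
func waitForPVBound(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pv.Status.Phase == v1.VolumeBound, nil
	})
}
```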
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 62 lines ...
Jun  5 01:00:39.978: INFO: PersistentVolumeClaim csi-hostpathncc84 found but phase is Pending instead of Bound.
Jun  5 01:00:42.031: INFO: PersistentVolumeClaim csi-hostpathncc84 found but phase is Pending instead of Bound.
Jun  5 01:00:44.084: INFO: PersistentVolumeClaim csi-hostpathncc84 found but phase is Pending instead of Bound.
Jun  5 01:00:46.136: INFO: PersistentVolumeClaim csi-hostpathncc84 found and phase=Bound (26.736674716s)
STEP: Creating pod pod-subpath-test-dynamicpv-9w24
STEP: Creating a pod to test subpath
Jun  5 01:00:46.293: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9w24" in namespace "provisioning-1993" to be "Succeeded or Failed"
Jun  5 01:00:46.345: INFO: Pod "pod-subpath-test-dynamicpv-9w24": Phase="Pending", Reason="", readiness=false. Elapsed: 52.102689ms
Jun  5 01:00:48.398: INFO: Pod "pod-subpath-test-dynamicpv-9w24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104370967s
Jun  5 01:00:50.450: INFO: Pod "pod-subpath-test-dynamicpv-9w24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156559188s
Jun  5 01:00:52.505: INFO: Pod "pod-subpath-test-dynamicpv-9w24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212151543s
STEP: Saw pod success
Jun  5 01:00:52.505: INFO: Pod "pod-subpath-test-dynamicpv-9w24" satisfied condition "Succeeded or Failed"
Jun  5 01:00:52.558: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-9w24 container test-container-subpath-dynamicpv-9w24: <nil>
STEP: delete the pod
Jun  5 01:00:52.688: INFO: Waiting for pod pod-subpath-test-dynamicpv-9w24 to disappear
Jun  5 01:00:52.740: INFO: Pod pod-subpath-test-dynamicpv-9w24 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9w24
Jun  5 01:00:52.740: INFO: Deleting pod "pod-subpath-test-dynamicpv-9w24" in namespace "provisioning-1993"
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:18.013: INFO: Only supported for providers [azure] (not aws)
... skipping 150 lines ...
• [SLOW TEST:15.091 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:01:10.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun  5 01:01:11.141: INFO: Waiting up to 5m0s for pod "pod-7c322197-11d5-4767-bc43-70f279b2339c" in namespace "emptydir-8082" to be "Succeeded or Failed"
Jun  5 01:01:11.193: INFO: Pod "pod-7c322197-11d5-4767-bc43-70f279b2339c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.839306ms
Jun  5 01:01:13.246: INFO: Pod "pod-7c322197-11d5-4767-bc43-70f279b2339c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104866341s
Jun  5 01:01:15.300: INFO: Pod "pod-7c322197-11d5-4767-bc43-70f279b2339c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158870379s
Jun  5 01:01:17.352: INFO: Pod "pod-7c322197-11d5-4767-bc43-70f279b2339c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211136312s
Jun  5 01:01:19.411: INFO: Pod "pod-7c322197-11d5-4767-bc43-70f279b2339c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.270485379s
STEP: Saw pod success
Jun  5 01:01:19.411: INFO: Pod "pod-7c322197-11d5-4767-bc43-70f279b2339c" satisfied condition "Succeeded or Failed"
Jun  5 01:01:19.488: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-7c322197-11d5-4767-bc43-70f279b2339c container test-container: <nil>
STEP: delete the pod
Jun  5 01:01:19.625: INFO: Waiting for pod pod-7c322197-11d5-4767-bc43-70f279b2339c to disappear
Jun  5 01:01:19.680: INFO: Pod pod-7c322197-11d5-4767-bc43-70f279b2339c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.962 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}

SSSSSSSSSS
------------------------------
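The emptyDir test above ("non-root,0777,tmpfs") mounts a memory-backed emptyDir and verifies the resulting mount mode. A sketch of the volume portion of such a pod spec; the volume name and mount path are placeholders:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// With Medium set to "Memory" the kubelet backs the emptyDir with a tmpfs
// mount, which is what the "(non-root,0777,tmpfs)" test above exercises.
var tmpfsEmptyDirVolume = v1.Volume{
	Name: "test-volume",
	VolumeSource: v1.VolumeSource{
		EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
	},
}

// The test container then mounts it and inspects the permissions.
var tmpfsMount = v1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
```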
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:25.336 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

SSS
------------------------------
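The "Probing container" test above waits for the kubelet to restart a container once its /healthz HTTP liveness probe fails. A sketch of such a probe using the 1.20-era core/v1 types; the port and timings are illustrative, and note that in newer API versions the embedded field is named ProbeHandler rather than Handler:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthzLiveness probes GET /healthz every 3s after an initial 15s delay;
// once it fails FailureThreshold times in a row the kubelet restarts the
// container, which is the behaviour the Conformance test above asserts.
var healthzLiveness = &v1.Probe{
	Handler: v1.Handler{ // named ProbeHandler in newer core/v1 versions
		HTTPGet: &v1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
	},
	InitialDelaySeconds: 15,
	PeriodSeconds:       3,
	FailureThreshold:    1,
}
```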
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:20.262: INFO: Only supported for providers [gce gke] (not aws)
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jun  5 01:01:15.150: INFO: Waiting up to 5m0s for pod "pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d" in namespace "emptydir-5631" to be "Succeeded or Failed"
Jun  5 01:01:15.201: INFO: Pod "pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.299752ms
Jun  5 01:01:17.253: INFO: Pod "pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103148265s
Jun  5 01:01:19.331: INFO: Pod "pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180748653s
Jun  5 01:01:21.383: INFO: Pod "pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.232827685s
STEP: Saw pod success
Jun  5 01:01:21.383: INFO: Pod "pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d" satisfied condition "Succeeded or Failed"
Jun  5 01:01:21.435: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d container test-container: <nil>
STEP: delete the pod
Jun  5 01:01:21.548: INFO: Waiting for pod pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d to disappear
Jun  5 01:01:21.599: INFO: Pod pod-ffcd8072-5eae-4ee4-8271-32ffbfcb7d8d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":2,"skipped":1,"failed":0}

SSS
------------------------------
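The FSGroup emptyDir test above sets a pod-level fsGroup and checks the mode of the default-medium volume. A sketch of the relevant security context; the GID value is arbitrary:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// The fsGroup is applied to supported volumes when they are mounted, so files
// created there are group-owned by that GID and the mode carries the group
// bits the test above verifies on the default-medium emptyDir.
func podSecurityContextWithFSGroup() *v1.PodSecurityContext {
	fsGroup := int64(1234)
	return &v1.PodSecurityContext{FSGroup: &fsGroup}
}
```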
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:21.748: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:21.988: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 42 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-0bc7b3a8-00d1-4306-8114-83e58a95977c
STEP: Creating a pod to test consume configMaps
Jun  5 01:01:15.538: INFO: Waiting up to 5m0s for pod "pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b" in namespace "configmap-5957" to be "Succeeded or Failed"
Jun  5 01:01:15.589: INFO: Pod "pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.081197ms
Jun  5 01:01:17.640: INFO: Pod "pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102460611s
Jun  5 01:01:19.697: INFO: Pod "pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158754124s
Jun  5 01:01:21.748: INFO: Pod "pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210682764s
STEP: Saw pod success
Jun  5 01:01:21.749: INFO: Pod "pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b" satisfied condition "Succeeded or Failed"
Jun  5 01:01:21.800: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:01:21.915: INFO: Waiting for pod pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b to disappear
Jun  5 01:01:21.966: INFO: Pod pod-configmaps-216312ce-767b-489d-a82e-dec903513a8b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.899 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":32,"failed":0}

SS
------------------------------
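The ConfigMap test above consumes the volume "with mappings", i.e. selected keys are projected to chosen paths instead of their key names. A sketch of that volume source; the ConfigMap name, key, and path are placeholders:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// Only the listed keys are projected, and each one is written at its
// Items[i].Path relative to the mount point rather than at the key name.
var mappedConfigMapVolume = v1.Volume{
	Name: "configmap-volume",
	VolumeSource: v1.VolumeSource{
		ConfigMap: &v1.ConfigMapVolumeSource{
			LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-map"},
			Items: []v1.KeyToPath{
				{Key: "data-1", Path: "path/to/data-2"},
			},
		},
	},
}
```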
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:22.095: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 14 lines ...
      Driver aws doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:178
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":55,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 00:59:56.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:666
    should expand volume without restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:681
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":3,"skipped":55,"failed":0}

S
------------------------------
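The CSI mock-volume expansion test above edits the PVC's requested size while a pod is still using it; with attach=on and nodeExpansion=on the volume and filesystem grow without a pod restart. A minimal client-go sketch of triggering such an expansion, assuming the StorageClass allows it; the function name, namespace, and sizes are placeholders:

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expandPVC bumps the claim's storage request; if the StorageClass has
// allowVolumeExpansion enabled, the CSI driver resizes the volume and, for
// online node expansion, the kubelet grows the filesystem in place.
func expandPVC(cs kubernetes.Interface, ns, name, newSize string) error {
	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse(newSize)
	_, err = cs.CoreV1().PersistentVolumeClaims(ns).Update(context.TODO(), pvc, metav1.UpdateOptions{})
	return err
}
```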
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:22.480: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 143 lines ...
Jun  5 01:00:13.910: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-6512-aws-sc7hl8x
STEP: creating a claim
Jun  5 01:00:13.962: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-5zqz
STEP: Creating a pod to test subpath
Jun  5 01:00:14.122: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5zqz" in namespace "provisioning-6512" to be "Succeeded or Failed"
Jun  5 01:00:14.173: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 51.177829ms
Jun  5 01:00:16.230: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107311026s
Jun  5 01:00:18.281: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159110101s
Jun  5 01:00:20.333: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21089457s
Jun  5 01:00:22.385: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.262466817s
Jun  5 01:00:24.437: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.314385465s
Jun  5 01:00:26.489: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.366992179s
Jun  5 01:00:28.541: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.418879159s
Jun  5 01:00:30.593: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.470687091s
Jun  5 01:00:32.645: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.522654887s
Jun  5 01:00:34.696: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.574104204s
Jun  5 01:00:36.762: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.640017856s
STEP: Saw pod success
Jun  5 01:00:36.762: INFO: Pod "pod-subpath-test-dynamicpv-5zqz" satisfied condition "Succeeded or Failed"
Jun  5 01:00:36.817: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-5zqz container test-container-subpath-dynamicpv-5zqz: <nil>
STEP: delete the pod
Jun  5 01:00:36.934: INFO: Waiting for pod pod-subpath-test-dynamicpv-5zqz to disappear
Jun  5 01:00:36.985: INFO: Pod pod-subpath-test-dynamicpv-5zqz no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5zqz
Jun  5 01:00:36.985: INFO: Deleting pod "pod-subpath-test-dynamicpv-5zqz" in namespace "provisioning-6512"
STEP: Creating pod pod-subpath-test-dynamicpv-5zqz
STEP: Creating a pod to test subpath
Jun  5 01:00:37.087: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5zqz" in namespace "provisioning-6512" to be "Succeeded or Failed"
Jun  5 01:00:37.138: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 50.970654ms
Jun  5 01:00:39.192: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104315346s
Jun  5 01:00:41.248: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160483189s
Jun  5 01:00:43.300: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21235579s
Jun  5 01:00:45.351: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263931232s
Jun  5 01:00:47.403: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.315597455s
... skipping 2 lines ...
Jun  5 01:00:53.582: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.494251389s
Jun  5 01:00:55.633: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 18.545998459s
Jun  5 01:00:57.685: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 20.597931194s
Jun  5 01:00:59.744: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Pending", Reason="", readiness=false. Elapsed: 22.656126271s
Jun  5 01:01:01.817: INFO: Pod "pod-subpath-test-dynamicpv-5zqz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.729262031s
STEP: Saw pod success
Jun  5 01:01:01.817: INFO: Pod "pod-subpath-test-dynamicpv-5zqz" satisfied condition "Succeeded or Failed"
Jun  5 01:01:01.870: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-5zqz container test-container-subpath-dynamicpv-5zqz: <nil>
STEP: delete the pod
Jun  5 01:01:02.016: INFO: Waiting for pod pod-subpath-test-dynamicpv-5zqz to disappear
Jun  5 01:01:02.068: INFO: Pod pod-subpath-test-dynamicpv-5zqz no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5zqz
Jun  5 01:01:02.068: INFO: Deleting pod "pod-subpath-test-dynamicpv-5zqz" in namespace "provisioning-6512"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:01:22.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-988" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:22.828: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 67 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jun  5 01:01:20.163: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085" in namespace "downward-api-7393" to be "Succeeded or Failed"
Jun  5 01:01:20.218: INFO: Pod "downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085": Phase="Pending", Reason="", readiness=false. Elapsed: 54.732823ms
Jun  5 01:01:22.270: INFO: Pod "downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107173523s
Jun  5 01:01:24.323: INFO: Pod "downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159883485s
Jun  5 01:01:26.376: INFO: Pod "downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212350485s
STEP: Saw pod success
Jun  5 01:01:26.376: INFO: Pod "downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085" satisfied condition "Succeeded or Failed"
Jun  5 01:01:26.430: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085 container client-container: <nil>
STEP: delete the pod
Jun  5 01:01:26.567: INFO: Waiting for pod downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085 to disappear
Jun  5 01:01:26.645: INFO: Pod downwardapi-volume-7c76279a-1287-40ce-aff1-fc4144a76085 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.958 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SS
------------------------------
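The Downward API volume test above exposes the container's CPU limit as a file in the volume. A sketch of that volume definition; the volume name, file path, and container name are placeholders:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// The kubelet writes the resolved value of limits.cpu for the named container
// into cpu_limit under the volume's mount path, which the test above reads back.
var downwardAPICPULimitVolume = v1.Volume{
	Name: "podinfo",
	VolumeSource: v1.VolumeSource{
		DownwardAPI: &v1.DownwardAPIVolumeSource{
			Items: []v1.DownwardAPIVolumeFile{{
				Path: "cpu_limit",
				ResourceFieldRef: &v1.ResourceFieldSelector{
					ContainerName: "client-container",
					Resource:      "limits.cpu",
				},
			}},
		},
	},
}
```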
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:26.820: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 265 lines ...
• [SLOW TEST:7.752 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":7,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 130 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1410
    should modify fsGroup if fsGroupPolicy=File
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1434
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":5,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 105 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:881
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:934
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:33.781: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 188 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":5,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:01:33.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:71
Jun  5 01:01:34.064: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
Jun  5 01:01:34.165: INFO: error finding default storageClass : No default storage class found
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:01:34.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pvc-protection-4307" for this suite.
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:106
... skipping 2 lines ...
S [SKIPPING] in Spec Setup (BeforeEach) [0.461 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:143

  error finding default storageClass : No default storage class found

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:830
------------------------------
SSSSSSSSSS
------------------------------
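The PVC Protection spec above is skipped with "No default storage class found": the framework wants to create a PVC without naming a class, so the cluster needs a default StorageClass. A sketch of what marking a class as default looks like; the class name and provisioner are placeholders, while the annotation key is the standard one:

```go
package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A StorageClass carrying this annotation is treated as the cluster default,
// so PVCs that omit storageClassName bind against it and specs like the
// skipped PVC Protection test above could run.
var defaultStorageClass = &storagev1.StorageClass{
	ObjectMeta: metav1.ObjectMeta{
		Name: "default-sc",
		Annotations: map[string]string{
			"storageclass.kubernetes.io/is-default-class": "true",
		},
	},
	Provisioner: "kubernetes.io/aws-ebs", // placeholder provisioner
}
```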
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 914 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:35.581: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 92 lines ...
• [SLOW TEST:20.624 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:35.752: INFO: Only supported for providers [openstack] (not aws)
... skipping 68 lines ...
Jun  5 01:01:31.978: INFO: PersistentVolumeClaim pvc-dc7jd found but phase is Pending instead of Bound.
Jun  5 01:01:34.037: INFO: PersistentVolumeClaim pvc-dc7jd found and phase=Bound (10.323696125s)
Jun  5 01:01:34.037: INFO: Waiting up to 3m0s for PersistentVolume local-229fp to have phase Bound
Jun  5 01:01:34.092: INFO: PersistentVolume local-229fp found and phase=Bound (55.22167ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wm9b
STEP: Creating a pod to test subpath
Jun  5 01:01:34.247: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wm9b" in namespace "provisioning-9194" to be "Succeeded or Failed"
Jun  5 01:01:34.297: INFO: Pod "pod-subpath-test-preprovisionedpv-wm9b": Phase="Pending", Reason="", readiness=false. Elapsed: 50.136227ms
Jun  5 01:01:36.360: INFO: Pod "pod-subpath-test-preprovisionedpv-wm9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113139722s
Jun  5 01:01:38.411: INFO: Pod "pod-subpath-test-preprovisionedpv-wm9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163679316s
STEP: Saw pod success
Jun  5 01:01:38.411: INFO: Pod "pod-subpath-test-preprovisionedpv-wm9b" satisfied condition "Succeeded or Failed"
Jun  5 01:01:38.461: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-wm9b container test-container-subpath-preprovisionedpv-wm9b: <nil>
STEP: delete the pod
Jun  5 01:01:38.588: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wm9b to disappear
Jun  5 01:01:38.638: INFO: Pod pod-subpath-test-preprovisionedpv-wm9b no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wm9b
Jun  5 01:01:38.638: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wm9b" in namespace "provisioning-9194"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:40.395: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 72 lines ...
• [SLOW TEST:10.802 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Jun  5 01:01:33.513: INFO: PersistentVolumeClaim pvc-6cthw found but phase is Pending instead of Bound.
Jun  5 01:01:35.570: INFO: PersistentVolumeClaim pvc-6cthw found and phase=Bound (14.424110457s)
Jun  5 01:01:35.570: INFO: Waiting up to 3m0s for PersistentVolume local-ctmbc to have phase Bound
Jun  5 01:01:35.622: INFO: PersistentVolume local-ctmbc found and phase=Bound (52.062748ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vmhd
STEP: Creating a pod to test subpath
Jun  5 01:01:35.907: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vmhd" in namespace "provisioning-1461" to be "Succeeded or Failed"
Jun  5 01:01:35.997: INFO: Pod "pod-subpath-test-preprovisionedpv-vmhd": Phase="Pending", Reason="", readiness=false. Elapsed: 90.174816ms
Jun  5 01:01:38.059: INFO: Pod "pod-subpath-test-preprovisionedpv-vmhd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151654913s
Jun  5 01:01:40.111: INFO: Pod "pod-subpath-test-preprovisionedpv-vmhd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204212227s
Jun  5 01:01:42.168: INFO: Pod "pod-subpath-test-preprovisionedpv-vmhd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260698011s
Jun  5 01:01:44.221: INFO: Pod "pod-subpath-test-preprovisionedpv-vmhd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313591619s
Jun  5 01:01:46.274: INFO: Pod "pod-subpath-test-preprovisionedpv-vmhd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.366992033s
STEP: Saw pod success
Jun  5 01:01:46.274: INFO: Pod "pod-subpath-test-preprovisionedpv-vmhd" satisfied condition "Succeeded or Failed"
Jun  5 01:01:46.334: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-vmhd container test-container-subpath-preprovisionedpv-vmhd: <nil>
STEP: delete the pod
Jun  5 01:01:46.456: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vmhd to disappear
Jun  5 01:01:46.509: INFO: Pod pod-subpath-test-preprovisionedpv-vmhd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vmhd
Jun  5 01:01:46.509: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vmhd" in namespace "provisioning-1461"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":29,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:47.414: INFO: Only supported for providers [openstack] (not aws)
... skipping 163 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":2,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:49.308: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 192 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1410
    should modify fsGroup if fsGroupPolicy=default
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1434
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":4,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:01:50.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1333" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:51.000: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 53 lines ...
• [SLOW TEST:19.829 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":6,"skipped":62,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:54.175: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 38 lines ...
Jun  5 01:01:19.457: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun  5 01:01:19.512: INFO: Waiting up to 5m0s for PersistentVolumeClaims [nfshvfpm] to have phase Bound
Jun  5 01:01:19.564: INFO: PersistentVolumeClaim nfshvfpm found but phase is Pending instead of Bound.
Jun  5 01:01:21.621: INFO: PersistentVolumeClaim nfshvfpm found and phase=Bound (2.108698957s)
STEP: Creating pod pod-subpath-test-dynamicpv-6tm2
STEP: Creating a pod to test subpath
Jun  5 01:01:21.783: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6tm2" in namespace "provisioning-4721" to be "Succeeded or Failed"
Jun  5 01:01:21.835: INFO: Pod "pod-subpath-test-dynamicpv-6tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 52.004522ms
Jun  5 01:01:23.890: INFO: Pod "pod-subpath-test-dynamicpv-6tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107043304s
Jun  5 01:01:25.943: INFO: Pod "pod-subpath-test-dynamicpv-6tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159653536s
Jun  5 01:01:28.000: INFO: Pod "pod-subpath-test-dynamicpv-6tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216427382s
Jun  5 01:01:30.052: INFO: Pod "pod-subpath-test-dynamicpv-6tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.268713261s
Jun  5 01:01:32.109: INFO: Pod "pod-subpath-test-dynamicpv-6tm2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.325594152s
Jun  5 01:01:34.161: INFO: Pod "pod-subpath-test-dynamicpv-6tm2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.377775947s
STEP: Saw pod success
Jun  5 01:01:34.161: INFO: Pod "pod-subpath-test-dynamicpv-6tm2" satisfied condition "Succeeded or Failed"
Jun  5 01:01:34.213: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-6tm2 container test-container-subpath-dynamicpv-6tm2: <nil>
STEP: delete the pod
Jun  5 01:01:34.475: INFO: Waiting for pod pod-subpath-test-dynamicpv-6tm2 to disappear
Jun  5 01:01:34.553: INFO: Pod pod-subpath-test-dynamicpv-6tm2 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6tm2
Jun  5 01:01:34.553: INFO: Deleting pod "pod-subpath-test-dynamicpv-6tm2" in namespace "provisioning-4721"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:54.277: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 116 lines ...
• [SLOW TEST:21.259 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":10,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:01:55.610: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
... skipping 52 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
... skipping 52 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-22c41b77-d9bc-4f89-8f09-f22872fa9bcd
STEP: Creating a pod to test consume configMaps
Jun  5 01:01:51.394: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-02b8e47f-45bf-4171-a0ce-779482ff1b14" in namespace "projected-3001" to be "Succeeded or Failed"
Jun  5 01:01:51.446: INFO: Pod "pod-projected-configmaps-02b8e47f-45bf-4171-a0ce-779482ff1b14": Phase="Pending", Reason="", readiness=false. Elapsed: 52.394589ms
Jun  5 01:01:53.499: INFO: Pod "pod-projected-configmaps-02b8e47f-45bf-4171-a0ce-779482ff1b14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104751821s
Jun  5 01:01:55.551: INFO: Pod "pod-projected-configmaps-02b8e47f-45bf-4171-a0ce-779482ff1b14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157020789s
STEP: Saw pod success
Jun  5 01:01:55.551: INFO: Pod "pod-projected-configmaps-02b8e47f-45bf-4171-a0ce-779482ff1b14" satisfied condition "Succeeded or Failed"
Jun  5 01:01:55.603: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-projected-configmaps-02b8e47f-45bf-4171-a0ce-779482ff1b14 container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:01:55.716: INFO: Waiting for pod pod-projected-configmaps-02b8e47f-45bf-4171-a0ce-779482ff1b14 to disappear
Jun  5 01:01:55.768: INFO: Pod pod-projected-configmaps-02b8e47f-45bf-4171-a0ce-779482ff1b14 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 5 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:01:54.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:139
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:01:56.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 183 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:241
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:03.504: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376

      Distro debian doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:184
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":7,"skipped":70,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:01:56.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name projected-secret-test-ef332a3c-2db8-450f-819d-4438bc920ffe
STEP: Creating a pod to test consume secrets
Jun  5 01:01:57.056: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53" in namespace "projected-3907" to be "Succeeded or Failed"
Jun  5 01:01:57.108: INFO: Pod "pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53": Phase="Pending", Reason="", readiness=false. Elapsed: 51.95174ms
Jun  5 01:01:59.160: INFO: Pod "pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104202536s
Jun  5 01:02:01.217: INFO: Pod "pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160555981s
Jun  5 01:02:03.270: INFO: Pod "pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.213434118s
STEP: Saw pod success
Jun  5 01:02:03.270: INFO: Pod "pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53" satisfied condition "Succeeded or Failed"
Jun  5 01:02:03.322: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53 container secret-volume-test: <nil>
STEP: delete the pod
Jun  5 01:02:03.442: INFO: Waiting for pod pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53 to disappear
Jun  5 01:02:03.494: INFO: Pod pod-projected-secrets-5cbe4dfa-2d2d-4d26-9fa6-84bc95adcc53 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.922 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:03.616: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441

      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:01:55.884: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
Jun  5 01:01:56.146: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun  5 01:01:56.199: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-pclj
STEP: Creating a pod to test subpath
Jun  5 01:01:56.254: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-pclj" in namespace "provisioning-2955" to be "Succeeded or Failed"
Jun  5 01:01:56.310: INFO: Pod "pod-subpath-test-inlinevolume-pclj": Phase="Pending", Reason="", readiness=false. Elapsed: 55.751875ms
Jun  5 01:01:58.389: INFO: Pod "pod-subpath-test-inlinevolume-pclj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134435474s
Jun  5 01:02:00.441: INFO: Pod "pod-subpath-test-inlinevolume-pclj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186813753s
Jun  5 01:02:02.494: INFO: Pod "pod-subpath-test-inlinevolume-pclj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239191453s
Jun  5 01:02:04.546: INFO: Pod "pod-subpath-test-inlinevolume-pclj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.291647217s
STEP: Saw pod success
Jun  5 01:02:04.546: INFO: Pod "pod-subpath-test-inlinevolume-pclj" satisfied condition "Succeeded or Failed"
Jun  5 01:02:04.599: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-pclj container test-container-volume-inlinevolume-pclj: <nil>
STEP: delete the pod
Jun  5 01:02:04.713: INFO: Waiting for pod pod-subpath-test-inlinevolume-pclj to disappear
Jun  5 01:02:04.765: INFO: Pod pod-subpath-test-inlinevolume-pclj no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-pclj
Jun  5 01:02:04.765: INFO: Deleting pod "pod-subpath-test-inlinevolume-pclj" in namespace "provisioning-2955"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:05.001: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 268 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":47,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
Jun  5 01:01:33.474: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n"
Jun  5 01:01:33.474: INFO: stdout: "service-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-pr
oxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9827
STEP: Deleting pod verify-service-up-exec-pod-kb5fx in namespace services-9827
STEP: verifying service-disabled is not up
Jun  5 01:01:33.599: INFO: Creating new host exec pod
Jun  5 01:01:35.873: INFO: Running '/tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9827 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.191.189:80 && echo service-down-failed'
Jun  5 01:01:38.791: INFO: rc: 28
Jun  5 01:01:38.791: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.66.191.189:80 && echo service-down-failed" in pod services-9827/verify-service-down-host-exec-pod: error running /tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9827 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.191.189:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.66.191.189:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9827
STEP: adding service-proxy-name label
STEP: verifying service is not up
Jun  5 01:01:38.961: INFO: Creating new host exec pod
Jun  5 01:01:45.120: INFO: Running '/tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9827 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.0.164:80 && echo service-down-failed'
Jun  5 01:01:47.790: INFO: rc: 28
Jun  5 01:01:47.790: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.0.164:80 && echo service-down-failed" in pod services-9827/verify-service-down-host-exec-pod: error running /tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9827 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.0.164:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.0.164:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9827
STEP: removing service-proxy-name label
STEP: verifying service is up
Jun  5 01:01:47.957: INFO: Creating new host exec pod
... skipping 8 lines ...
Jun  5 01:01:58.159: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.0.164:80\n+ echo\n"
Jun  5 01:01:58.159: INFO: stdout: "service-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-pr
oxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-2v8pj\nservice-proxy-toggled-54zwg\nservice-proxy-toggled-jczhf\nservice-proxy-toggled-54zwg\n"
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-9827
STEP: Deleting pod verify-service-up-exec-pod-kbhft in namespace services-9827
STEP: verifying service-disabled is still not up
Jun  5 01:01:58.376: INFO: Creating new host exec pod
Jun  5 01:02:04.533: INFO: Running '/tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9827 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.191.189:80 && echo service-down-failed'
Jun  5 01:02:07.297: INFO: rc: 28
Jun  5 01:02:07.297: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.66.191.189:80 && echo service-down-failed" in pod services-9827/verify-service-down-host-exec-pod: error running /tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9827 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.191.189:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.66.191.189:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-9827
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:07.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:56.506 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2536
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":2,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:07.482: INFO: Only supported for providers [azure] (not aws)
... skipping 70 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jun  5 01:02:03.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edbf4fef-fa26-4c58-9e98-fb92da45ff71" in namespace "projected-7334" to be "Succeeded or Failed"
Jun  5 01:02:03.882: INFO: Pod "downwardapi-volume-edbf4fef-fa26-4c58-9e98-fb92da45ff71": Phase="Pending", Reason="", readiness=false. Elapsed: 53.812538ms
Jun  5 01:02:05.934: INFO: Pod "downwardapi-volume-edbf4fef-fa26-4c58-9e98-fb92da45ff71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105898425s
Jun  5 01:02:07.987: INFO: Pod "downwardapi-volume-edbf4fef-fa26-4c58-9e98-fb92da45ff71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158063647s
STEP: Saw pod success
Jun  5 01:02:07.987: INFO: Pod "downwardapi-volume-edbf4fef-fa26-4c58-9e98-fb92da45ff71" satisfied condition "Succeeded or Failed"
Jun  5 01:02:08.039: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod downwardapi-volume-edbf4fef-fa26-4c58-9e98-fb92da45ff71 container client-container: <nil>
STEP: delete the pod
Jun  5 01:02:08.150: INFO: Waiting for pod downwardapi-volume-edbf4fef-fa26-4c58-9e98-fb92da45ff71 to disappear
Jun  5 01:02:08.202: INFO: Pod downwardapi-volume-edbf4fef-fa26-4c58-9e98-fb92da45ff71 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:08.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7334" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 116 lines ...
Jun  5 01:02:02.626: INFO: PersistentVolumeClaim pvc-chnxg found but phase is Pending instead of Bound.
Jun  5 01:02:04.678: INFO: PersistentVolumeClaim pvc-chnxg found and phase=Bound (10.309356134s)
Jun  5 01:02:04.678: INFO: Waiting up to 3m0s for PersistentVolume local-4mjd6 to have phase Bound
Jun  5 01:02:04.730: INFO: PersistentVolume local-4mjd6 found and phase=Bound (52.372302ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7d64
STEP: Creating a pod to test subpath
Jun  5 01:02:04.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7d64" in namespace "provisioning-7312" to be "Succeeded or Failed"
Jun  5 01:02:04.944: INFO: Pod "pod-subpath-test-preprovisionedpv-7d64": Phase="Pending", Reason="", readiness=false. Elapsed: 51.312975ms
Jun  5 01:02:06.996: INFO: Pod "pod-subpath-test-preprovisionedpv-7d64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102831368s
Jun  5 01:02:09.048: INFO: Pod "pod-subpath-test-preprovisionedpv-7d64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154700934s
Jun  5 01:02:11.100: INFO: Pod "pod-subpath-test-preprovisionedpv-7d64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206806182s
STEP: Saw pod success
Jun  5 01:02:11.100: INFO: Pod "pod-subpath-test-preprovisionedpv-7d64" satisfied condition "Succeeded or Failed"
Jun  5 01:02:11.151: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7d64 container test-container-subpath-preprovisionedpv-7d64: <nil>
STEP: delete the pod
Jun  5 01:02:11.264: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7d64 to disappear
Jun  5 01:02:11.316: INFO: Pod pod-subpath-test-preprovisionedpv-7d64 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7d64
Jun  5 01:02:11.316: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7d64" in namespace "provisioning-7312"
STEP: Creating pod pod-subpath-test-preprovisionedpv-7d64
STEP: Creating a pod to test subpath
Jun  5 01:02:11.419: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7d64" in namespace "provisioning-7312" to be "Succeeded or Failed"
Jun  5 01:02:11.470: INFO: Pod "pod-subpath-test-preprovisionedpv-7d64": Phase="Pending", Reason="", readiness=false. Elapsed: 51.347578ms
Jun  5 01:02:13.522: INFO: Pod "pod-subpath-test-preprovisionedpv-7d64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103242238s
STEP: Saw pod success
Jun  5 01:02:13.522: INFO: Pod "pod-subpath-test-preprovisionedpv-7d64" satisfied condition "Succeeded or Failed"
Jun  5 01:02:13.574: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-7d64 container test-container-subpath-preprovisionedpv-7d64: <nil>
STEP: delete the pod
Jun  5 01:02:13.688: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7d64 to disappear
Jun  5 01:02:13.739: INFO: Pod pod-subpath-test-preprovisionedpv-7d64 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7d64
Jun  5 01:02:13.739: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7d64" in namespace "provisioning-7312"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [sig-windows] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jun  5 01:02:14.533: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:15.549: INFO: Only supported for providers [azure] (not aws)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":9,"skipped":55,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:07.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:8.813 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:16.358: INFO: >>> kubeConfig: /root/.kube/config
[It] watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/protocol.go:46
Jun  5 01:02:16.359: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:16.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":11,"skipped":55,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:16.550: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:17.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8709" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":12,"skipped":70,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:17.335: INFO: Only supported for providers [gce gke] (not aws)
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:18.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5677" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":4,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:19.093: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 170 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":4,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:21.338: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":5,"skipped":8,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:13.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:8.343 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":6,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:21.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3682" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:22.052: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  [k8s.io] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed","total":-1,"completed":4,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:22.620: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":5,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:23.419: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 97 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:00.208: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Jun  5 01:02:17.526: INFO: PersistentVolumeClaim pvc-dpfhw found but phase is Pending instead of Bound.
Jun  5 01:02:19.579: INFO: PersistentVolumeClaim pvc-dpfhw found and phase=Bound (12.374514198s)
Jun  5 01:02:19.579: INFO: Waiting up to 3m0s for PersistentVolume local-9qvr7 to have phase Bound
Jun  5 01:02:19.631: INFO: PersistentVolume local-9qvr7 found and phase=Bound (51.802417ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n7nr
STEP: Creating a pod to test subpath
Jun  5 01:02:19.795: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n7nr" in namespace "provisioning-8093" to be "Succeeded or Failed"
Jun  5 01:02:19.847: INFO: Pod "pod-subpath-test-preprovisionedpv-n7nr": Phase="Pending", Reason="", readiness=false. Elapsed: 52.289506ms
Jun  5 01:02:21.899: INFO: Pod "pod-subpath-test-preprovisionedpv-n7nr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10468904s
Jun  5 01:02:23.954: INFO: Pod "pod-subpath-test-preprovisionedpv-n7nr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159155656s
STEP: Saw pod success
Jun  5 01:02:23.954: INFO: Pod "pod-subpath-test-preprovisionedpv-n7nr" satisfied condition "Succeeded or Failed"
Jun  5 01:02:24.006: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-n7nr container test-container-subpath-preprovisionedpv-n7nr: <nil>
STEP: delete the pod
Jun  5 01:02:24.125: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n7nr to disappear
Jun  5 01:02:24.179: INFO: Pod pod-subpath-test-preprovisionedpv-n7nr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n7nr
Jun  5 01:02:24.179: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n7nr" in namespace "provisioning-8093"
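The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above (and in the later subPath, downward-API, and ConfigMap specs) are a poll on the pod phase. A minimal sketch of that wait with client-go; namespace, pod name, and intervals are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "example-ns", "example-pod" // placeholders

	// Re-check the phase every 2s for up to 5 minutes, mirroring the log above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition "Succeeded or Failed" met, success
		case corev1.PodFailed:
			return true, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // still Pending or Running, keep polling
		}
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod succeeded")
}
```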
... skipping 46 lines ...
• [SLOW TEST:36.364 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:167
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service","total":-1,"completed":5,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:27.374: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 142 lines ...
STEP: Creating a kubernetes client
Jun  5 01:01:40.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Jun  5 01:01:40.696: INFO: PodSpec: initContainers in spec.initContainers
Jun  5 01:02:28.666: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-1adc3607-eeab-46ca-b541-f28655329a53", GenerateName:"", Namespace:"init-container-7816", SelfLink:"", UID:"c24e9af2-d9bf-4c19-83c3-b260b6dd0a0f", ResourceVersion:"12210", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63758451700, loc:(*time.Location)(0x7977f00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"696318519"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003038200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003038220)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003038240), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003038260)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bs6rt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0027c2140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bs6rt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bs6rt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bs6rt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0021862b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-52-198.us-west-1.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e321c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002186330)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002186350)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002186358), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00218635c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001d0c170), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451700, loc:(*time.Location)(0x7977f00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451700, loc:(*time.Location)(0x7977f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451700, loc:(*time.Location)(0x7977f00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63758451700, loc:(*time.Location)(0x7977f00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.52.198", PodIP:"100.96.2.135", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.2.135"}}, StartTime:(*v1.Time)(0xc003038280), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e322a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e32310)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://987018cc419da98dee490a26d53a98dfd683ffaa9b91cfa65abb1b368966316b", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030382c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0030382a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0021863df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:28.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7816" for this suite.


• [SLOW TEST:48.329 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":7,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:28.789: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 161 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:169
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":4,"skipped":68,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:34.706: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 47 lines ...
[It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:912
Jun  5 01:02:24.600: INFO: Running '/tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1641 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jun  5 01:02:25.297: INFO: rc: 7
Jun  5 01:02:25.356: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jun  5 01:02:25.407: INFO: Pod kube-proxy-mode-detector no longer exists
Jun  5 01:02:25.408: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1641 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
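The failed detection above is simply a curl against kube-proxy's port 10249 from inside a helper pod; curl exit code 7 means the connection was refused, which the test tolerates ("test failure may be expected"). The same probe, if run directly where kube-proxy serves that port, looks roughly like this sketch (port and path follow the command in the log; everything else is an assumption):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoint the e2e test curls via kubectl exec; only reachable from
	// a network namespace where kube-proxy exposes it.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://localhost:10249/proxyMode")
	if err != nil {
		// Mirrors the "Couldn't detect KubeProxy mode" tolerance in the log.
		fmt.Println("could not detect kube-proxy mode:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	fmt.Println("kube-proxy mode:", string(body)) // e.g. "iptables" or "ipvs"
}
```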
STEP: creating a TCP service sourceip-test with type=ClusterIP in namespace services-1641
Jun  5 01:02:25.465: INFO: sourceip-test cluster ip: 100.68.225.220
STEP: Picking 2 Nodes to test whether source IP is preserved or not
STEP: Creating a webserver pod to be part of the TCP service which echoes back source ip
Jun  5 01:02:25.622: INFO: The status of Pod echo-sourceip is Pending, waiting for it to be Running (with Ready = true)
... skipping 28 lines ...
• [SLOW TEST:13.653 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:912
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":7,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:28.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:8.803 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":5,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:112
Jun  5 01:02:37.131: INFO: Driver "nfs" does not support block volumes - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
... skipping 115 lines ...
STEP: stopping service up-down-1
STEP: deleting ReplicationController up-down-1 in namespace services-3389, will wait for the garbage collector to delete the pods
Jun  5 01:01:48.469: INFO: Deleting ReplicationController up-down-1 took: 53.274383ms
Jun  5 01:01:48.570: INFO: Terminating ReplicationController up-down-1 pods took: 100.279206ms
STEP: verifying service up-down-1 is not up
Jun  5 01:02:04.733: INFO: Creating new host exec pod
Jun  5 01:02:08.910: INFO: Running '/tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3389 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.77.140:80 && echo service-down-failed'
Jun  5 01:02:11.611: INFO: rc: 28
Jun  5 01:02:11.611: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.68.77.140:80 && echo service-down-failed" in pod services-3389/verify-service-down-host-exec-pod: error running /tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3389 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.68.77.140:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.68.77.140:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-3389
STEP: verifying service up-down-2 is still up
Jun  5 01:02:11.670: INFO: Creating new host exec pod
Jun  5 01:02:13.828: INFO: Creating new exec pod
... skipping 53 lines ...
• [SLOW TEST:79.574 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1025
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":5,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:37.630: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 99 lines ...
      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:101
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":8,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:38.014: INFO: >>> kubeConfig: /root/.kube/config
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:38.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-9309" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":6,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:38.869: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 272 lines ...
Jun  5 01:02:32.755: INFO: PersistentVolumeClaim pvc-wzgpp found but phase is Pending instead of Bound.
Jun  5 01:02:34.807: INFO: PersistentVolumeClaim pvc-wzgpp found and phase=Bound (10.320594529s)
Jun  5 01:02:34.807: INFO: Waiting up to 3m0s for PersistentVolume local-g8p6h to have phase Bound
Jun  5 01:02:34.859: INFO: PersistentVolume local-g8p6h found and phase=Bound (51.814842ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bds8
STEP: Creating a pod to test subpath
Jun  5 01:02:35.023: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bds8" in namespace "provisioning-7170" to be "Succeeded or Failed"
Jun  5 01:02:35.075: INFO: Pod "pod-subpath-test-preprovisionedpv-bds8": Phase="Pending", Reason="", readiness=false. Elapsed: 51.848329ms
Jun  5 01:02:37.129: INFO: Pod "pod-subpath-test-preprovisionedpv-bds8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106407958s
Jun  5 01:02:39.185: INFO: Pod "pod-subpath-test-preprovisionedpv-bds8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161892219s
STEP: Saw pod success
Jun  5 01:02:39.185: INFO: Pod "pod-subpath-test-preprovisionedpv-bds8" satisfied condition "Succeeded or Failed"
Jun  5 01:02:39.237: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-bds8 container test-container-subpath-preprovisionedpv-bds8: <nil>
STEP: delete the pod
Jun  5 01:02:39.353: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bds8 to disappear
Jun  5 01:02:39.405: INFO: Pod pod-subpath-test-preprovisionedpv-bds8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bds8
Jun  5 01:02:39.405: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bds8" in namespace "provisioning-7170"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:40.202: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 185 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jun  5 01:02:40.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0701a29f-10bb-4b93-94cd-c27d7097fde1" in namespace "projected-2941" to be "Succeeded or Failed"
Jun  5 01:02:41.004: INFO: Pod "downwardapi-volume-0701a29f-10bb-4b93-94cd-c27d7097fde1": Phase="Pending", Reason="", readiness=false. Elapsed: 51.99515ms
Jun  5 01:02:43.059: INFO: Pod "downwardapi-volume-0701a29f-10bb-4b93-94cd-c27d7097fde1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.106880903s
STEP: Saw pod success
Jun  5 01:02:43.059: INFO: Pod "downwardapi-volume-0701a29f-10bb-4b93-94cd-c27d7097fde1" satisfied condition "Succeeded or Failed"
Jun  5 01:02:43.121: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod downwardapi-volume-0701a29f-10bb-4b93-94cd-c27d7097fde1 container client-container: <nil>
STEP: delete the pod
Jun  5 01:02:43.243: INFO: Waiting for pod downwardapi-volume-0701a29f-10bb-4b93-94cd-c27d7097fde1 to disappear
Jun  5 01:02:43.297: INFO: Pod downwardapi-volume-0701a29f-10bb-4b93-94cd-c27d7097fde1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:43.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2941" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":22,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":9,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:05.744: INFO: >>> kubeConfig: /root/.kube/config
... skipping 4 lines ...
Jun  5 01:02:06.006: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jun  5 01:02:06.553: INFO: Successfully created a new PD: "aws://us-west-1a/vol-0d87360c99aab386b".
Jun  5 01:02:06.553: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-6n8s
STEP: Creating a pod to test exec-volume-test
Jun  5 01:02:06.607: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-6n8s" in namespace "volume-562" to be "Succeeded or Failed"
Jun  5 01:02:06.659: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 52.098706ms
Jun  5 01:02:08.712: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104640897s
Jun  5 01:02:10.765: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157393803s
Jun  5 01:02:12.817: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209915169s
Jun  5 01:02:14.877: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.269397534s
Jun  5 01:02:16.929: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.321572822s
Jun  5 01:02:18.981: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.374053883s
Jun  5 01:02:21.034: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.426711393s
Jun  5 01:02:23.087: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 16.479746667s
Jun  5 01:02:25.139: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Pending", Reason="", readiness=false. Elapsed: 18.532227437s
Jun  5 01:02:27.192: INFO: Pod "exec-volume-test-inlinevolume-6n8s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.584496726s
STEP: Saw pod success
Jun  5 01:02:27.192: INFO: Pod "exec-volume-test-inlinevolume-6n8s" satisfied condition "Succeeded or Failed"
Jun  5 01:02:27.244: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod exec-volume-test-inlinevolume-6n8s container exec-container-inlinevolume-6n8s: <nil>
STEP: delete the pod
Jun  5 01:02:27.358: INFO: Waiting for pod exec-volume-test-inlinevolume-6n8s to disappear
Jun  5 01:02:27.410: INFO: Pod exec-volume-test-inlinevolume-6n8s no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-6n8s
Jun  5 01:02:27.410: INFO: Deleting pod "exec-volume-test-inlinevolume-6n8s" in namespace "volume-562"
Jun  5 01:02:27.642: INFO: Couldn't delete PD "aws://us-west-1a/vol-0d87360c99aab386b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d87360c99aab386b is currently attached to i-01bd6548e8f6bd7c1
	status code: 400, request id: decc7860-c2b1-403f-9948-7f506c0011f1
Jun  5 01:02:32.978: INFO: Couldn't delete PD "aws://us-west-1a/vol-0d87360c99aab386b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d87360c99aab386b is currently attached to i-01bd6548e8f6bd7c1
	status code: 400, request id: a3912e1b-4975-483f-a296-a8a6488aeb2c
Jun  5 01:02:38.422: INFO: Couldn't delete PD "aws://us-west-1a/vol-0d87360c99aab386b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d87360c99aab386b is currently attached to i-01bd6548e8f6bd7c1
	status code: 400, request id: 31a5b4e4-fb52-4362-86f3-4a4217fdc6d1
Jun  5 01:02:43.796: INFO: Successfully deleted PD "aws://us-west-1a/vol-0d87360c99aab386b".
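The VolumeInUse errors above are expected: the EBS volume is still detaching from the instance when the first delete attempts run, so the test retries roughly every 5s until the delete succeeds. A sketch of that retry pattern with the AWS SDK for Go v1; region, volume ID, and timings are placeholders:

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-1")}))
	svc := ec2.New(sess)
	volumeID := "vol-0123456789abcdef0" // placeholder

	// Keep trying while the volume is still attached; give up after 2 minutes.
	err := wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			return true, nil
		}
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			fmt.Printf("volume %s still attached, retrying: %v\n", volumeID, err)
			return false, nil // transient: volume not yet detached
		}
		return false, err // anything else is fatal
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("deleted", volumeID)
}
```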
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:43.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-562" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:43.924: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 98 lines ...
Jun  5 01:02:32.779: INFO: PersistentVolumeClaim pvc-t59xd found but phase is Pending instead of Bound.
Jun  5 01:02:34.832: INFO: PersistentVolumeClaim pvc-t59xd found and phase=Bound (10.320096687s)
Jun  5 01:02:34.832: INFO: Waiting up to 3m0s for PersistentVolume local-ccpwn to have phase Bound
Jun  5 01:02:34.883: INFO: PersistentVolume local-ccpwn found and phase=Bound (51.10522ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vh5s
STEP: Creating a pod to test subpath
Jun  5 01:02:35.044: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vh5s" in namespace "provisioning-224" to be "Succeeded or Failed"
Jun  5 01:02:35.095: INFO: Pod "pod-subpath-test-preprovisionedpv-vh5s": Phase="Pending", Reason="", readiness=false. Elapsed: 51.722704ms
Jun  5 01:02:37.149: INFO: Pod "pod-subpath-test-preprovisionedpv-vh5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1054318s
Jun  5 01:02:39.201: INFO: Pod "pod-subpath-test-preprovisionedpv-vh5s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157359885s
Jun  5 01:02:41.253: INFO: Pod "pod-subpath-test-preprovisionedpv-vh5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.208991213s
STEP: Saw pod success
Jun  5 01:02:41.253: INFO: Pod "pod-subpath-test-preprovisionedpv-vh5s" satisfied condition "Succeeded or Failed"
Jun  5 01:02:41.304: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-vh5s container test-container-subpath-preprovisionedpv-vh5s: <nil>
STEP: delete the pod
Jun  5 01:02:41.428: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vh5s to disappear
Jun  5 01:02:41.480: INFO: Pod pod-subpath-test-preprovisionedpv-vh5s no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vh5s
Jun  5 01:02:41.480: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vh5s" in namespace "provisioning-224"
STEP: Creating pod pod-subpath-test-preprovisionedpv-vh5s
STEP: Creating a pod to test subpath
Jun  5 01:02:41.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vh5s" in namespace "provisioning-224" to be "Succeeded or Failed"
Jun  5 01:02:41.636: INFO: Pod "pod-subpath-test-preprovisionedpv-vh5s": Phase="Pending", Reason="", readiness=false. Elapsed: 51.230073ms
Jun  5 01:02:43.688: INFO: Pod "pod-subpath-test-preprovisionedpv-vh5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102826734s
STEP: Saw pod success
Jun  5 01:02:43.688: INFO: Pod "pod-subpath-test-preprovisionedpv-vh5s" satisfied condition "Succeeded or Failed"
Jun  5 01:02:43.739: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-vh5s container test-container-subpath-preprovisionedpv-vh5s: <nil>
STEP: delete the pod
Jun  5 01:02:43.863: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vh5s to disappear
Jun  5 01:02:43.915: INFO: Pod pod-subpath-test-preprovisionedpv-vh5s no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vh5s
Jun  5 01:02:43.915: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vh5s" in namespace "provisioning-224"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":50,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:47.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8724" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":6,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:88
Jun  5 01:02:47.917: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
... skipping 318 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:49.819: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 72 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "nfs" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:97
------------------------------
... skipping 157 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:50.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-8028" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":7,"skipped":59,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:50.624: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":13,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:50.791: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":56,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:44.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 188 lines ...
• [SLOW TEST:57.886 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":88,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:53.591: INFO: Only supported for providers [vsphere] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jun  5 01:02:50.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52578439-0fe9-4d9a-919f-45c0db3aaeea" in namespace "downward-api-9083" to be "Succeeded or Failed"
Jun  5 01:02:50.708: INFO: Pod "downwardapi-volume-52578439-0fe9-4d9a-919f-45c0db3aaeea": Phase="Pending", Reason="", readiness=false. Elapsed: 54.390935ms
Jun  5 01:02:52.761: INFO: Pod "downwardapi-volume-52578439-0fe9-4d9a-919f-45c0db3aaeea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107309915s
Jun  5 01:02:54.813: INFO: Pod "downwardapi-volume-52578439-0fe9-4d9a-919f-45c0db3aaeea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.159483416s
STEP: Saw pod success
Jun  5 01:02:54.813: INFO: Pod "downwardapi-volume-52578439-0fe9-4d9a-919f-45c0db3aaeea" satisfied condition "Succeeded or Failed"
Jun  5 01:02:54.866: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod downwardapi-volume-52578439-0fe9-4d9a-919f-45c0db3aaeea container client-container: <nil>
STEP: delete the pod
Jun  5 01:02:54.991: INFO: Waiting for pod downwardapi-volume-52578439-0fe9-4d9a-919f-45c0db3aaeea to disappear
Jun  5 01:02:55.043: INFO: Pod downwardapi-volume-52578439-0fe9-4d9a-919f-45c0db3aaeea no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:55.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9083" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:55.166: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-2f93c5a2-2c78-4497-8134-e4852d737839
STEP: Creating a pod to test consume configMaps
Jun  5 01:02:51.170: INFO: Waiting up to 5m0s for pod "pod-configmaps-07ea4774-669e-4b85-92a0-c5fa6656f356" in namespace "configmap-1264" to be "Succeeded or Failed"
Jun  5 01:02:51.221: INFO: Pod "pod-configmaps-07ea4774-669e-4b85-92a0-c5fa6656f356": Phase="Pending", Reason="", readiness=false. Elapsed: 51.176034ms
Jun  5 01:02:53.274: INFO: Pod "pod-configmaps-07ea4774-669e-4b85-92a0-c5fa6656f356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104152432s
Jun  5 01:02:55.326: INFO: Pod "pod-configmaps-07ea4774-669e-4b85-92a0-c5fa6656f356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155699561s
STEP: Saw pod success
Jun  5 01:02:55.326: INFO: Pod "pod-configmaps-07ea4774-669e-4b85-92a0-c5fa6656f356" satisfied condition "Succeeded or Failed"
Jun  5 01:02:55.378: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-configmaps-07ea4774-669e-4b85-92a0-c5fa6656f356 container configmap-volume-test: <nil>
STEP: delete the pod
Jun  5 01:02:55.494: INFO: Waiting for pod pod-configmaps-07ea4774-669e-4b85-92a0-c5fa6656f356 to disappear
Jun  5 01:02:55.545: INFO: Pod pod-configmaps-07ea4774-669e-4b85-92a0-c5fa6656f356 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:55.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1264" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":84,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Jun  5 01:02:47.445: INFO: PersistentVolumeClaim pvc-6tfd8 found but phase is Pending instead of Bound.
Jun  5 01:02:49.498: INFO: PersistentVolumeClaim pvc-6tfd8 found and phase=Bound (2.104380991s)
Jun  5 01:02:49.498: INFO: Waiting up to 3m0s for PersistentVolume local-2zd97 to have phase Bound
Jun  5 01:02:49.550: INFO: PersistentVolume local-2zd97 found and phase=Bound (51.838619ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fmf4
STEP: Creating a pod to test subpath
Jun  5 01:02:49.710: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fmf4" in namespace "provisioning-178" to be "Succeeded or Failed"
Jun  5 01:02:49.763: INFO: Pod "pod-subpath-test-preprovisionedpv-fmf4": Phase="Pending", Reason="", readiness=false. Elapsed: 52.629032ms
Jun  5 01:02:51.815: INFO: Pod "pod-subpath-test-preprovisionedpv-fmf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104611973s
Jun  5 01:02:53.867: INFO: Pod "pod-subpath-test-preprovisionedpv-fmf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157315649s
Jun  5 01:02:55.920: INFO: Pod "pod-subpath-test-preprovisionedpv-fmf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209934128s
STEP: Saw pod success
Jun  5 01:02:55.920: INFO: Pod "pod-subpath-test-preprovisionedpv-fmf4" satisfied condition "Succeeded or Failed"
Jun  5 01:02:55.977: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-fmf4 container test-container-subpath-preprovisionedpv-fmf4: <nil>
STEP: delete the pod
Jun  5 01:02:56.090: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fmf4 to disappear
Jun  5 01:02:56.142: INFO: Pod pod-subpath-test-preprovisionedpv-fmf4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fmf4
Jun  5 01:02:56.142: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fmf4" in namespace "provisioning-178"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:55.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in volume subpath
Jun  5 01:02:55.978: INFO: Waiting up to 5m0s for pod "var-expansion-3c9b0328-8218-441f-a0f7-d0a4054d825d" in namespace "var-expansion-605" to be "Succeeded or Failed"
Jun  5 01:02:56.029: INFO: Pod "var-expansion-3c9b0328-8218-441f-a0f7-d0a4054d825d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.225739ms
Jun  5 01:02:58.081: INFO: Pod "var-expansion-3c9b0328-8218-441f-a0f7-d0a4054d825d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103119951s
STEP: Saw pod success
Jun  5 01:02:58.081: INFO: Pod "var-expansion-3c9b0328-8218-441f-a0f7-d0a4054d825d" satisfied condition "Succeeded or Failed"
Jun  5 01:02:58.133: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod var-expansion-3c9b0328-8218-441f-a0f7-d0a4054d825d container dapi-container: <nil>
STEP: delete the pod
Jun  5 01:02:58.247: INFO: Waiting for pod var-expansion-3c9b0328-8218-441f-a0f7-d0a4054d825d to disappear
Jun  5 01:02:58.298: INFO: Pod var-expansion-3c9b0328-8218-441f-a0f7-d0a4054d825d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:02:58.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-605" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":15,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:02:58.411: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 62 lines ...
Jun  5 01:02:32.908: INFO: PersistentVolumeClaim pvc-j6mtr found but phase is Pending instead of Bound.
Jun  5 01:02:34.968: INFO: PersistentVolumeClaim pvc-j6mtr found and phase=Bound (4.160932321s)
Jun  5 01:02:34.968: INFO: Waiting up to 3m0s for PersistentVolume local-8hkx9 to have phase Bound
Jun  5 01:02:35.019: INFO: PersistentVolume local-8hkx9 found and phase=Bound (51.10015ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n4bf
STEP: Creating a pod to test atomic-volume-subpath
Jun  5 01:02:35.172: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n4bf" in namespace "provisioning-4424" to be "Succeeded or Failed"
Jun  5 01:02:35.223: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Pending", Reason="", readiness=false. Elapsed: 50.662766ms
Jun  5 01:02:37.276: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104238945s
Jun  5 01:02:39.328: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Running", Reason="", readiness=true. Elapsed: 4.155524065s
Jun  5 01:02:41.379: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Running", Reason="", readiness=true. Elapsed: 6.206965179s
Jun  5 01:02:43.432: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Running", Reason="", readiness=true. Elapsed: 8.260095454s
Jun  5 01:02:45.488: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Running", Reason="", readiness=true. Elapsed: 10.315345937s
... skipping 2 lines ...
Jun  5 01:02:51.652: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Running", Reason="", readiness=true. Elapsed: 16.479486195s
Jun  5 01:02:53.703: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Running", Reason="", readiness=true. Elapsed: 18.530781844s
Jun  5 01:02:55.754: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Running", Reason="", readiness=true. Elapsed: 20.582057314s
Jun  5 01:02:57.806: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Running", Reason="", readiness=true. Elapsed: 22.634017995s
Jun  5 01:02:59.857: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.68525941s
STEP: Saw pod success
Jun  5 01:02:59.858: INFO: Pod "pod-subpath-test-preprovisionedpv-n4bf" satisfied condition "Succeeded or Failed"
Jun  5 01:02:59.908: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-n4bf container test-container-subpath-preprovisionedpv-n4bf: <nil>
STEP: delete the pod
Jun  5 01:03:00.019: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n4bf to disappear
Jun  5 01:03:00.071: INFO: Pod pod-subpath-test-preprovisionedpv-n4bf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n4bf
Jun  5 01:03:00.071: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n4bf" in namespace "provisioning-4424"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-secret-vhkn
STEP: Creating a pod to test atomic-volume-subpath
Jun  5 01:02:38.931: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vhkn" in namespace "subpath-3347" to be "Succeeded or Failed"
Jun  5 01:02:38.983: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Pending", Reason="", readiness=false. Elapsed: 51.24308ms
Jun  5 01:02:41.034: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 2.102791018s
Jun  5 01:02:43.097: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 4.165531746s
Jun  5 01:02:45.149: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 6.217147974s
Jun  5 01:02:47.200: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 8.268564177s
Jun  5 01:02:49.251: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 10.32002625s
Jun  5 01:02:51.303: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 12.371794609s
Jun  5 01:02:53.355: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 14.423155508s
Jun  5 01:02:55.407: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 16.475667756s
Jun  5 01:02:57.459: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 18.527562959s
Jun  5 01:02:59.511: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Running", Reason="", readiness=true. Elapsed: 20.579326426s
Jun  5 01:03:01.562: INFO: Pod "pod-subpath-test-secret-vhkn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.630957912s
STEP: Saw pod success
Jun  5 01:03:01.562: INFO: Pod "pod-subpath-test-secret-vhkn" satisfied condition "Succeeded or Failed"
Jun  5 01:03:01.617: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-secret-vhkn container test-container-subpath-secret-vhkn: <nil>
STEP: delete the pod
Jun  5 01:03:01.740: INFO: Waiting for pod pod-subpath-test-secret-vhkn to disappear
Jun  5 01:03:01.791: INFO: Pod pod-subpath-test-secret-vhkn no longer exists
STEP: Deleting pod pod-subpath-test-secret-vhkn
Jun  5 01:03:01.791: INFO: Deleting pod "pod-subpath-test-secret-vhkn" in namespace "subpath-3347"
... skipping 36 lines ...
• [SLOW TEST:108.718 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:142
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":7,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:50.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:235
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:03.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3271" for this suite.


• [SLOW TEST:12.514 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:235
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":8,"skipped":68,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:03.165: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jun  5 01:02:58.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a45f7eb4-784e-4b2d-8e9d-f1bf5d4f7376" in namespace "downward-api-1115" to be "Succeeded or Failed"
Jun  5 01:02:58.799: INFO: Pod "downwardapi-volume-a45f7eb4-784e-4b2d-8e9d-f1bf5d4f7376": Phase="Pending", Reason="", readiness=false. Elapsed: 51.071408ms
Jun  5 01:03:00.853: INFO: Pod "downwardapi-volume-a45f7eb4-784e-4b2d-8e9d-f1bf5d4f7376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104472452s
Jun  5 01:03:02.904: INFO: Pod "downwardapi-volume-a45f7eb4-784e-4b2d-8e9d-f1bf5d4f7376": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156099638s
STEP: Saw pod success
Jun  5 01:03:02.904: INFO: Pod "downwardapi-volume-a45f7eb4-784e-4b2d-8e9d-f1bf5d4f7376" satisfied condition "Succeeded or Failed"
Jun  5 01:03:02.956: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod downwardapi-volume-a45f7eb4-784e-4b2d-8e9d-f1bf5d4f7376 container client-container: <nil>
STEP: delete the pod
Jun  5 01:03:03.147: INFO: Waiting for pod downwardapi-volume-a45f7eb4-784e-4b2d-8e9d-f1bf5d4f7376 to disappear
Jun  5 01:03:03.198: INFO: Pod downwardapi-volume-a45f7eb4-784e-4b2d-8e9d-f1bf5d4f7376 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:03.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1115" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 36 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jun  5 01:03:02.086: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4005faf9-7457-478d-99c3-e9f0a595a552" in namespace "security-context-test-9917" to be "Succeeded or Failed"
Jun  5 01:03:02.137: INFO: Pod "busybox-user-65534-4005faf9-7457-478d-99c3-e9f0a595a552": Phase="Pending", Reason="", readiness=false. Elapsed: 50.834212ms
Jun  5 01:03:04.188: INFO: Pod "busybox-user-65534-4005faf9-7457-478d-99c3-e9f0a595a552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.10190425s
Jun  5 01:03:04.188: INFO: Pod "busybox-user-65534-4005faf9-7457-478d-99c3-e9f0a595a552" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:04.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9917" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}

SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":42,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:03:01.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun  5 01:03:02.269: INFO: Waiting up to 5m0s for pod "pod-37a17ab1-040b-4ba0-bfe5-8784ed737189" in namespace "emptydir-1511" to be "Succeeded or Failed"
Jun  5 01:03:02.324: INFO: Pod "pod-37a17ab1-040b-4ba0-bfe5-8784ed737189": Phase="Pending", Reason="", readiness=false. Elapsed: 54.659549ms
Jun  5 01:03:04.376: INFO: Pod "pod-37a17ab1-040b-4ba0-bfe5-8784ed737189": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.106775255s
STEP: Saw pod success
Jun  5 01:03:04.376: INFO: Pod "pod-37a17ab1-040b-4ba0-bfe5-8784ed737189" satisfied condition "Succeeded or Failed"
Jun  5 01:03:04.428: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-37a17ab1-040b-4ba0-bfe5-8784ed737189 container test-container: <nil>
STEP: delete the pod
Jun  5 01:03:04.548: INFO: Waiting for pod pod-37a17ab1-040b-4ba0-bfe5-8784ed737189 to disappear
Jun  5 01:03:04.599: INFO: Pod pod-37a17ab1-040b-4ba0-bfe5-8784ed737189 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:04.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1511" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:04.725: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 58 lines ...
• [SLOW TEST:29.747 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:116
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":6,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:06.944: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:06.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7077" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":75,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:03:07.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2 lines ...
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-2077
[It] should not deadlock when a pod's predecessor fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248
STEP: Creating statefulset ss in namespace statefulset-2077
Jun  5 01:03:07.440: INFO: error finding default storageClass : No default storage class found
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jun  5 01:03:07.441: INFO: Deleting all statefulset in ns statefulset-2077
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:07.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    should not deadlock when a pod's predecessor fails [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248

    error finding default storageClass : No default storage class found

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:830
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 86 lines ...
Jun  5 01:03:02.340: INFO: PersistentVolumeClaim pvc-jl5hb found but phase is Pending instead of Bound.
Jun  5 01:03:04.393: INFO: PersistentVolumeClaim pvc-jl5hb found and phase=Bound (6.208828659s)
Jun  5 01:03:04.393: INFO: Waiting up to 3m0s for PersistentVolume local-lc4f2 to have phase Bound
Jun  5 01:03:04.446: INFO: PersistentVolume local-lc4f2 found and phase=Bound (53.011372ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-8ncw
STEP: Creating a pod to test exec-volume-test
Jun  5 01:03:04.604: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-8ncw" in namespace "volume-5197" to be "Succeeded or Failed"
Jun  5 01:03:04.663: INFO: Pod "exec-volume-test-preprovisionedpv-8ncw": Phase="Pending", Reason="", readiness=false. Elapsed: 59.054551ms
Jun  5 01:03:06.716: INFO: Pod "exec-volume-test-preprovisionedpv-8ncw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.111376915s
STEP: Saw pod success
Jun  5 01:03:06.716: INFO: Pod "exec-volume-test-preprovisionedpv-8ncw" satisfied condition "Succeeded or Failed"
Jun  5 01:03:06.768: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-8ncw container exec-container-preprovisionedpv-8ncw: <nil>
STEP: delete the pod
Jun  5 01:03:06.889: INFO: Waiting for pod exec-volume-test-preprovisionedpv-8ncw to disappear
Jun  5 01:03:06.941: INFO: Pod exec-volume-test-preprovisionedpv-8ncw no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-8ncw
Jun  5 01:03:06.941: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8ncw" in namespace "volume-5197"
... skipping 19 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:07.752: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:08.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3961" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":11,"skipped":43,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:08.592: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 108 lines ...
Jun  5 01:03:02.053: INFO: PersistentVolumeClaim pvc-dw8sl found but phase is Pending instead of Bound.
Jun  5 01:03:04.104: INFO: PersistentVolumeClaim pvc-dw8sl found and phase=Bound (14.416087055s)
Jun  5 01:03:04.104: INFO: Waiting up to 3m0s for PersistentVolume local-f5btn to have phase Bound
Jun  5 01:03:04.155: INFO: PersistentVolume local-f5btn found and phase=Bound (51.314398ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vnfk
STEP: Creating a pod to test subpath
Jun  5 01:03:04.310: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vnfk" in namespace "provisioning-7908" to be "Succeeded or Failed"
Jun  5 01:03:04.361: INFO: Pod "pod-subpath-test-preprovisionedpv-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 51.374803ms
Jun  5 01:03:06.413: INFO: Pod "pod-subpath-test-preprovisionedpv-vnfk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103169795s
Jun  5 01:03:08.465: INFO: Pod "pod-subpath-test-preprovisionedpv-vnfk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154743636s
STEP: Saw pod success
Jun  5 01:03:08.465: INFO: Pod "pod-subpath-test-preprovisionedpv-vnfk" satisfied condition "Succeeded or Failed"
Jun  5 01:03:08.527: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-vnfk container test-container-subpath-preprovisionedpv-vnfk: <nil>
STEP: delete the pod
Jun  5 01:03:08.644: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vnfk to disappear
Jun  5 01:03:08.695: INFO: Pod pod-subpath-test-preprovisionedpv-vnfk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vnfk
Jun  5 01:03:08.695: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vnfk" in namespace "provisioning-7908"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:09.555: INFO: Only supported for providers [vsphere] (not aws)
... skipping 47 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-e857c62a-e614-41cf-9302-538cf20c5d34
STEP: Creating a pod to test consume secrets
Jun  5 01:03:08.124: INFO: Waiting up to 5m0s for pod "pod-secrets-2b134439-5701-421f-96aa-518a28b2f981" in namespace "secrets-8257" to be "Succeeded or Failed"
Jun  5 01:03:08.175: INFO: Pod "pod-secrets-2b134439-5701-421f-96aa-518a28b2f981": Phase="Pending", Reason="", readiness=false. Elapsed: 51.170007ms
Jun  5 01:03:10.227: INFO: Pod "pod-secrets-2b134439-5701-421f-96aa-518a28b2f981": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.102644689s
STEP: Saw pod success
Jun  5 01:03:10.227: INFO: Pod "pod-secrets-2b134439-5701-421f-96aa-518a28b2f981" satisfied condition "Succeeded or Failed"
Jun  5 01:03:10.278: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-secrets-2b134439-5701-421f-96aa-518a28b2f981 container secret-volume-test: <nil>
STEP: delete the pod
Jun  5 01:03:10.388: INFO: Waiting for pod pod-secrets-2b134439-5701-421f-96aa-518a28b2f981 to disappear
Jun  5 01:03:10.440: INFO: Pod pod-secrets-2b134439-5701-421f-96aa-518a28b2f981 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:10.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8257" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:10.553: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 94 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
Jun  5 01:03:07.227: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun  5 01:03:07.280: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-bdqg
STEP: Creating a pod to test subpath
Jun  5 01:03:07.335: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-bdqg" in namespace "provisioning-7639" to be "Succeeded or Failed"
Jun  5 01:03:07.391: INFO: Pod "pod-subpath-test-inlinevolume-bdqg": Phase="Pending", Reason="", readiness=false. Elapsed: 56.665191ms
Jun  5 01:03:09.443: INFO: Pod "pod-subpath-test-inlinevolume-bdqg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108095841s
Jun  5 01:03:11.495: INFO: Pod "pod-subpath-test-inlinevolume-bdqg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160189411s
STEP: Saw pod success
Jun  5 01:03:11.495: INFO: Pod "pod-subpath-test-inlinevolume-bdqg" satisfied condition "Succeeded or Failed"
Jun  5 01:03:11.546: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-bdqg container test-container-subpath-inlinevolume-bdqg: <nil>
STEP: delete the pod
Jun  5 01:03:11.658: INFO: Waiting for pod pod-subpath-test-inlinevolume-bdqg to disappear
Jun  5 01:03:11.710: INFO: Pod pod-subpath-test-inlinevolume-bdqg no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-bdqg
Jun  5 01:03:11.710: INFO: Deleting pod "pod-subpath-test-inlinevolume-bdqg" in namespace "provisioning-7639"
... skipping 27 lines ...
• [SLOW TEST:9.403 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:13.750: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 43 lines ...
Jun  5 01:03:03.789: INFO: PersistentVolumeClaim pvc-wnj7v found but phase is Pending instead of Bound.
Jun  5 01:03:05.840: INFO: PersistentVolumeClaim pvc-wnj7v found and phase=Bound (8.255947677s)
Jun  5 01:03:05.840: INFO: Waiting up to 3m0s for PersistentVolume local-qnzkj to have phase Bound
Jun  5 01:03:05.890: INFO: PersistentVolume local-qnzkj found and phase=Bound (50.08581ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-42fz
STEP: Creating a pod to test subpath
Jun  5 01:03:06.042: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-42fz" in namespace "provisioning-9776" to be "Succeeded or Failed"
Jun  5 01:03:06.093: INFO: Pod "pod-subpath-test-preprovisionedpv-42fz": Phase="Pending", Reason="", readiness=false. Elapsed: 50.027561ms
Jun  5 01:03:08.143: INFO: Pod "pod-subpath-test-preprovisionedpv-42fz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10023325s
Jun  5 01:03:10.193: INFO: Pod "pod-subpath-test-preprovisionedpv-42fz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150769357s
Jun  5 01:03:12.246: INFO: Pod "pod-subpath-test-preprovisionedpv-42fz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203578393s
STEP: Saw pod success
Jun  5 01:03:12.246: INFO: Pod "pod-subpath-test-preprovisionedpv-42fz" satisfied condition "Succeeded or Failed"
Jun  5 01:03:12.297: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-42fz container test-container-subpath-preprovisionedpv-42fz: <nil>
STEP: delete the pod
Jun  5 01:03:12.408: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-42fz to disappear
Jun  5 01:03:12.457: INFO: Pod pod-subpath-test-preprovisionedpv-42fz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-42fz
Jun  5 01:03:12.458: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-42fz" in namespace "provisioning-9776"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":12,"skipped":96,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 25 lines ...
Jun  5 01:03:13.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun  5 01:03:14.074: INFO: Waiting up to 5m0s for pod "pod-17effc69-1900-4942-a556-31e424bf754c" in namespace "emptydir-9711" to be "Succeeded or Failed"
Jun  5 01:03:14.125: INFO: Pod "pod-17effc69-1900-4942-a556-31e424bf754c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.072892ms
Jun  5 01:03:16.191: INFO: Pod "pod-17effc69-1900-4942-a556-31e424bf754c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.117032799s
STEP: Saw pod success
Jun  5 01:03:16.191: INFO: Pod "pod-17effc69-1900-4942-a556-31e424bf754c" satisfied condition "Succeeded or Failed"
Jun  5 01:03:16.250: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-17effc69-1900-4942-a556-31e424bf754c container test-container: <nil>
STEP: delete the pod
Jun  5 01:03:16.425: INFO: Waiting for pod pod-17effc69-1900-4942-a556-31e424bf754c to disappear
Jun  5 01:03:16.480: INFO: Pod pod-17effc69-1900-4942-a556-31e424bf754c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:16.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9711" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:16.601: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:17.180: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 236 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:347
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:20.371: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 108 lines ...
• [SLOW TEST:87.556 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 43 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:241
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":6,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:22.204: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 127 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:81
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:206
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":8,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:22.494: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 23 lines ...
Jun  5 01:03:19.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:109
STEP: Creating a pod to test downward api env vars
Jun  5 01:03:20.232: INFO: Waiting up to 5m0s for pod "downward-api-e398cbab-11dd-4442-9ceb-f15d623339e1" in namespace "downward-api-4777" to be "Succeeded or Failed"
Jun  5 01:03:20.284: INFO: Pod "downward-api-e398cbab-11dd-4442-9ceb-f15d623339e1": Phase="Pending", Reason="", readiness=false. Elapsed: 51.927152ms
Jun  5 01:03:22.337: INFO: Pod "downward-api-e398cbab-11dd-4442-9ceb-f15d623339e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104113427s
STEP: Saw pod success
Jun  5 01:03:22.337: INFO: Pod "downward-api-e398cbab-11dd-4442-9ceb-f15d623339e1" satisfied condition "Succeeded or Failed"
Jun  5 01:03:22.389: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod downward-api-e398cbab-11dd-4442-9ceb-f15d623339e1 container dapi-container: <nil>
STEP: delete the pod
Jun  5 01:03:22.504: INFO: Waiting for pod downward-api-e398cbab-11dd-4442-9ceb-f15d623339e1 to disappear
Jun  5 01:03:22.556: INFO: Pod downward-api-e398cbab-11dd-4442-9ceb-f15d623339e1 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:22.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4777" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":10,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:22.672: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 224 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:26.480: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 107 lines ...
• [SLOW TEST:38.832 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:75
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":7,"skipped":113,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:26.891: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jun  5 01:03:23.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8c3a115-b0ee-4f67-b5f5-c44fb7327213" in namespace "downward-api-1774" to be "Succeeded or Failed"
Jun  5 01:03:23.482: INFO: Pod "downwardapi-volume-b8c3a115-b0ee-4f67-b5f5-c44fb7327213": Phase="Pending", Reason="", readiness=false. Elapsed: 51.69887ms
Jun  5 01:03:25.546: INFO: Pod "downwardapi-volume-b8c3a115-b0ee-4f67-b5f5-c44fb7327213": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115831314s
Jun  5 01:03:27.599: INFO: Pod "downwardapi-volume-b8c3a115-b0ee-4f67-b5f5-c44fb7327213": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168298004s
STEP: Saw pod success
Jun  5 01:03:27.599: INFO: Pod "downwardapi-volume-b8c3a115-b0ee-4f67-b5f5-c44fb7327213" satisfied condition "Succeeded or Failed"
Jun  5 01:03:27.651: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod downwardapi-volume-b8c3a115-b0ee-4f67-b5f5-c44fb7327213 container client-container: <nil>
STEP: delete the pod
Jun  5 01:03:27.768: INFO: Waiting for pod downwardapi-volume-b8c3a115-b0ee-4f67-b5f5-c44fb7327213 to disappear
Jun  5 01:03:27.820: INFO: Pod downwardapi-volume-b8c3a115-b0ee-4f67-b5f5-c44fb7327213 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:27.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1774" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:27.935: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 38 lines ...
STEP: Creating a kubernetes client
Jun  5 01:03:21.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:145
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:778
STEP: creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Jun  5 01:03:22.187: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jun  5 01:03:28.293: INFO: deleting claim "volume-provisioning-3664"/"pvc-kcnfx"
Jun  5 01:03:28.347: INFO: deleting storage class volume-provisioning-3664-invalid-aws
... skipping 5 lines ...

• [SLOW TEST:6.637 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:777
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:778
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":5,"skipped":43,"failed":0}

SSS
------------------------------
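A minimal sketch of the objects behind the Invalid AWS KMS key case above: a StorageClass for the in-tree kubernetes.io/aws-ebs provisioner that points at a nonexistent KMS key, plus a claim against it; provisioning is expected to fail and no PV should be created. The ARN and names are illustrative, and the claim spec assumes the k8s.io/api level used by this 1.20 run:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	scName := "volume-provisioning-invalid-aws"

	sc := &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: scName},
		Provisioner: "kubernetes.io/aws-ebs",
		Parameters: map[string]string{
			"encrypted": "true",
			// Syntactically plausible but nonexistent key: the provisioner
			// should surface an error and never bind a PV.
			"kmsKeyId": "arn:aws:kms:us-west-1:000000000000:key/00000000-0000-0000-0000-000000000000",
		},
	}

	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pvc-"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes:      []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
			StorageClassName: &scName,
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceStorage: resource.MustParse("1Gi")},
			},
		},
	}

	fmt.Printf("StorageClass %q (provisioner %s) + claim %q*: expect provisioning failure, no PV\n",
		sc.Name, sc.Provisioner, pvc.GenerateName)
}

------------------------------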
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:28.530: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 83 lines ...
Jun  5 01:03:18.607: INFO: PersistentVolumeClaim pvc-4b7qz found but phase is Pending instead of Bound.
Jun  5 01:03:20.659: INFO: PersistentVolumeClaim pvc-4b7qz found and phase=Bound (12.367718618s)
Jun  5 01:03:20.659: INFO: Waiting up to 3m0s for PersistentVolume local-cjxd6 to have phase Bound
Jun  5 01:03:20.711: INFO: PersistentVolume local-cjxd6 found and phase=Bound (51.470573ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c926
STEP: Creating a pod to test subpath
Jun  5 01:03:20.866: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c926" in namespace "provisioning-8414" to be "Succeeded or Failed"
Jun  5 01:03:20.917: INFO: Pod "pod-subpath-test-preprovisionedpv-c926": Phase="Pending", Reason="", readiness=false. Elapsed: 51.294069ms
Jun  5 01:03:22.969: INFO: Pod "pod-subpath-test-preprovisionedpv-c926": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103005542s
Jun  5 01:03:25.021: INFO: Pod "pod-subpath-test-preprovisionedpv-c926": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154729818s
Jun  5 01:03:27.073: INFO: Pod "pod-subpath-test-preprovisionedpv-c926": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206419246s
Jun  5 01:03:29.124: INFO: Pod "pod-subpath-test-preprovisionedpv-c926": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.258173853s
STEP: Saw pod success
Jun  5 01:03:29.124: INFO: Pod "pod-subpath-test-preprovisionedpv-c926" satisfied condition "Succeeded or Failed"
Jun  5 01:03:29.176: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-c926 container test-container-volume-preprovisionedpv-c926: <nil>
STEP: delete the pod
Jun  5 01:03:29.298: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c926 to disappear
Jun  5 01:03:29.349: INFO: Pod pod-subpath-test-preprovisionedpv-c926 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c926
Jun  5 01:03:29.349: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c926" in namespace "provisioning-8414"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":17,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:30.516: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
Jun  5 01:03:27.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:149
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jun  5 01:03:28.269: INFO: Waiting up to 5m0s for pod "security-context-dbb94fea-a304-4e36-bb22-9f3d77233171" in namespace "security-context-9985" to be "Succeeded or Failed"
Jun  5 01:03:28.321: INFO: Pod "security-context-dbb94fea-a304-4e36-bb22-9f3d77233171": Phase="Pending", Reason="", readiness=false. Elapsed: 51.867573ms
Jun  5 01:03:30.374: INFO: Pod "security-context-dbb94fea-a304-4e36-bb22-9f3d77233171": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.104179917s
STEP: Saw pod success
Jun  5 01:03:30.374: INFO: Pod "security-context-dbb94fea-a304-4e36-bb22-9f3d77233171" satisfied condition "Succeeded or Failed"
Jun  5 01:03:30.426: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod security-context-dbb94fea-a304-4e36-bb22-9f3d77233171 container test-container: <nil>
STEP: delete the pod
Jun  5 01:03:30.547: INFO: Waiting for pod security-context-dbb94fea-a304-4e36-bb22-9f3d77233171 to disappear
Jun  5 01:03:30.601: INFO: Pod security-context-dbb94fea-a304-4e36-bb22-9f3d77233171 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:30.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9985" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":12,"skipped":50,"failed":0}

S
------------------------------
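A minimal sketch of what the seccomp test above sets up: a pod requesting an unconfined seccomp profile. The log shows the legacy seccomp.security.alpha.kubernetes.io annotation used on this 1.20 cluster; the first-class securityContext field (available since 1.19) is shown alongside it. Names and image are illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "security-context-seccomp-example",
			// Legacy annotation form, as seen in the log above.
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				// Field-based equivalent of the annotation.
				SeccompProfile: &v1.SeccompProfile{Type: v1.SeccompProfileTypeUnconfined},
			},
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "grep Seccomp /proc/1/status"},
			}},
		},
	}
	fmt.Printf("pod %q requests seccomp profile %q\n",
		pod.Name, pod.Spec.SecurityContext.SeccompProfile.Type)
}

------------------------------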
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:30.722: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 141 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:265
------------------------------
... skipping 67 lines ...
• [SLOW TEST:70.981 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:33.663: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 66 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-9de479fe-81b8-4ce5-a0fc-043647ecd5a4
STEP: Creating a pod to test consume configMaps
Jun  5 01:03:31.207: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64c86a2d-aa2e-4b7a-9253-f676fae3169a" in namespace "projected-1013" to be "Succeeded or Failed"
Jun  5 01:03:31.259: INFO: Pod "pod-projected-configmaps-64c86a2d-aa2e-4b7a-9253-f676fae3169a": Phase="Pending", Reason="", readiness=false. Elapsed: 51.797486ms
Jun  5 01:03:33.311: INFO: Pod "pod-projected-configmaps-64c86a2d-aa2e-4b7a-9253-f676fae3169a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103998619s
Jun  5 01:03:35.363: INFO: Pod "pod-projected-configmaps-64c86a2d-aa2e-4b7a-9253-f676fae3169a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.15625124s
STEP: Saw pod success
Jun  5 01:03:35.363: INFO: Pod "pod-projected-configmaps-64c86a2d-aa2e-4b7a-9253-f676fae3169a" satisfied condition "Succeeded or Failed"
Jun  5 01:03:35.415: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-projected-configmaps-64c86a2d-aa2e-4b7a-9253-f676fae3169a container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun  5 01:03:35.530: INFO: Waiting for pod pod-projected-configmaps-64c86a2d-aa2e-4b7a-9253-f676fae3169a to disappear
Jun  5 01:03:35.582: INFO: Pod pod-projected-configmaps-64c86a2d-aa2e-4b7a-9253-f676fae3169a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:35.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1013" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:35.705: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
Jun  5 01:03:16.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
Jun  5 01:03:16.943: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jun  5 01:03:17.049: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-966" in namespace "provisioning-966" to be "Succeeded or Failed"
Jun  5 01:03:17.100: INFO: Pod "hostpath-symlink-prep-provisioning-966": Phase="Pending", Reason="", readiness=false. Elapsed: 50.674773ms
Jun  5 01:03:19.152: INFO: Pod "hostpath-symlink-prep-provisioning-966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102329425s
Jun  5 01:03:21.203: INFO: Pod "hostpath-symlink-prep-provisioning-966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154158409s
STEP: Saw pod success
Jun  5 01:03:21.203: INFO: Pod "hostpath-symlink-prep-provisioning-966" satisfied condition "Succeeded or Failed"
Jun  5 01:03:21.204: INFO: Deleting pod "hostpath-symlink-prep-provisioning-966" in namespace "provisioning-966"
Jun  5 01:03:21.261: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-966" to be fully deleted
Jun  5 01:03:21.312: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-ns4b
Jun  5 01:03:25.487: INFO: Running '/tmp/kubectl2084640272/kubectl --server=https://api.e2e-2636771260-f3fa8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-966 exec pod-subpath-test-inlinevolume-ns4b --container test-container-volume-inlinevolume-ns4b -- /bin/sh -c rm -r /test-volume/provisioning-966'
Jun  5 01:03:26.130: INFO: stderr: ""
Jun  5 01:03:26.130: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-ns4b
Jun  5 01:03:26.130: INFO: Deleting pod "pod-subpath-test-inlinevolume-ns4b" in namespace "provisioning-966"
Jun  5 01:03:26.182: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-ns4b" to be fully deleted
STEP: Deleting pod
Jun  5 01:03:34.284: INFO: Deleting pod "pod-subpath-test-inlinevolume-ns4b" in namespace "provisioning-966"
Jun  5 01:03:34.388: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-966" in namespace "provisioning-966" to be "Succeeded or Failed"
Jun  5 01:03:34.438: INFO: Pod "hostpath-symlink-prep-provisioning-966": Phase="Pending", Reason="", readiness=false. Elapsed: 50.486795ms
Jun  5 01:03:36.491: INFO: Pod "hostpath-symlink-prep-provisioning-966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.103328225s
STEP: Saw pod success
Jun  5 01:03:36.491: INFO: Pod "hostpath-symlink-prep-provisioning-966" satisfied condition "Succeeded or Failed"
Jun  5 01:03:36.491: INFO: Deleting pod "hostpath-symlink-prep-provisioning-966" in namespace "provisioning-966"
Jun  5 01:03:36.568: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-966" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:36.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-966" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":10,"skipped":66,"failed":0}

SS
------------------------------
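The unmount-after-delete test above shells out to kubectl exec to remove the subpath directory from inside the running pod (see the "Running '/tmp/kubectl...'" line in the log). A rough sketch of that step using os/exec; all names, paths and the kubeconfig location are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Remove the subpath directory from inside the pod, then the test deletes
	// the pod and expects the kubelet to still unmount the volume cleanly.
	cmd := exec.Command(
		"kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace=provisioning-example",
		"exec", "pod-subpath-test-example",
		"--container", "test-container-volume-example",
		"--", "/bin/sh", "-c", "rm -r /test-volume/provisioning-example",
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A non-zero exit would mean the directory could not be removed.
		fmt.Printf("exec failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("subpath directory removed; pod can now be deleted and unmounted\n%s", out)
}

------------------------------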
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:36.748: INFO: Only supported for providers [azure] (not aws)
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":13,"skipped":71,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:39.809: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 152 lines ...
Jun  5 01:03:33.622: INFO: PersistentVolumeClaim pvc-spm8l found but phase is Pending instead of Bound.
Jun  5 01:03:35.675: INFO: PersistentVolumeClaim pvc-spm8l found and phase=Bound (4.156907986s)
Jun  5 01:03:35.675: INFO: Waiting up to 3m0s for PersistentVolume local-j6hcz to have phase Bound
Jun  5 01:03:35.731: INFO: PersistentVolume local-j6hcz found and phase=Bound (55.321129ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6xgj
STEP: Creating a pod to test exec-volume-test
Jun  5 01:03:35.893: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6xgj" in namespace "volume-24" to be "Succeeded or Failed"
Jun  5 01:03:35.944: INFO: Pod "exec-volume-test-preprovisionedpv-6xgj": Phase="Pending", Reason="", readiness=false. Elapsed: 51.663072ms
Jun  5 01:03:37.997: INFO: Pod "exec-volume-test-preprovisionedpv-6xgj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10435955s
Jun  5 01:03:40.049: INFO: Pod "exec-volume-test-preprovisionedpv-6xgj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156417967s
STEP: Saw pod success
Jun  5 01:03:40.049: INFO: Pod "exec-volume-test-preprovisionedpv-6xgj" satisfied condition "Succeeded or Failed"
Jun  5 01:03:40.101: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-6xgj container exec-container-preprovisionedpv-6xgj: <nil>
STEP: delete the pod
Jun  5 01:03:40.215: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6xgj to disappear
Jun  5 01:03:40.267: INFO: Pod exec-volume-test-preprovisionedpv-6xgj no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6xgj
Jun  5 01:03:40.267: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6xgj" in namespace "volume-24"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:40.986: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":114,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:41.950: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":8,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:03:29.799: INFO: >>> kubeConfig: /root/.kube/config
... skipping 12 lines ...
Jun  5 01:03:32.809: INFO: PersistentVolumeClaim pvc-49scf found but phase is Pending instead of Bound.
Jun  5 01:03:34.861: INFO: PersistentVolumeClaim pvc-49scf found and phase=Bound (2.1011779s)
Jun  5 01:03:34.861: INFO: Waiting up to 3m0s for PersistentVolume local-wz5v2 to have phase Bound
Jun  5 01:03:34.912: INFO: PersistentVolume local-wz5v2 found and phase=Bound (51.582961ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-v9rw
STEP: Creating a pod to test subpath
Jun  5 01:03:35.065: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-v9rw" in namespace "provisioning-9110" to be "Succeeded or Failed"
Jun  5 01:03:35.115: INFO: Pod "pod-subpath-test-preprovisionedpv-v9rw": Phase="Pending", Reason="", readiness=false. Elapsed: 50.272349ms
Jun  5 01:03:37.165: INFO: Pod "pod-subpath-test-preprovisionedpv-v9rw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100657887s
Jun  5 01:03:39.216: INFO: Pod "pod-subpath-test-preprovisionedpv-v9rw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.151542067s
Jun  5 01:03:41.267: INFO: Pod "pod-subpath-test-preprovisionedpv-v9rw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202037338s
STEP: Saw pod success
Jun  5 01:03:41.267: INFO: Pod "pod-subpath-test-preprovisionedpv-v9rw" satisfied condition "Succeeded or Failed"
Jun  5 01:03:41.321: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-v9rw container test-container-volume-preprovisionedpv-v9rw: <nil>
STEP: delete the pod
Jun  5 01:03:41.440: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-v9rw to disappear
Jun  5 01:03:41.490: INFO: Pod pod-subpath-test-preprovisionedpv-v9rw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-v9rw
Jun  5 01:03:41.490: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-v9rw" in namespace "provisioning-9110"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:42.347: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 134 lines ...
• [SLOW TEST:97.451 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:223
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":7,"skipped":56,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:100
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 7 lines ...
Jun  5 01:03:12.186: INFO: Creating resource for dynamic PV
Jun  5 01:03:12.186: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-1174-aws-scdp87c
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jun  5 01:03:12.347: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jun  5 01:03:12.461: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:14.567: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:16.572: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:18.566: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:20.565: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:22.566: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:24.564: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:26.568: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:28.581: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:30.565: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:32.571: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:34.565: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:36.571: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:38.565: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:40.565: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:42.568: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-1174-aws-scdp87c",
  	... // 2 identical fields
  }

Jun  5 01:03:42.672: INFO: Error updating pvc awsp6dxn: PersistentVolumeClaim "awsp6dxn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:154
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":8,"skipped":24,"failed":0}

SS
------------------------------
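A minimal client-go sketch of the update the expansion test above keeps retrying: bumping spec.resources.requests.storage on a claim whose StorageClass does not set allowVolumeExpansion, which the apiserver rejects with the "spec is immutable after creation except resources.requests for bound claims" error shown in the log. Kubeconfig path, namespace and claim name are illustrative; client-go is assumed at the v0.20 level used by this run:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	pvcs := client.CoreV1().PersistentVolumeClaims("volume-expand-example")

	pvc, err := pvcs.Get(ctx, "example-claim", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Ask for 2Gi instead of the original 1Gi.
	pvc.Spec.Resources.Requests[v1.ResourceStorage] = resource.MustParse("2Gi")

	if _, err := pvcs.Update(ctx, pvc, metav1.UpdateOptions{}); err != nil {
		// Expected here: the StorageClass does not allow expansion, so the
		// spec change is forbidden by apiserver validation.
		fmt.Printf("update rejected as expected: %v\n", err)
		return
	}
	fmt.Println("update accepted (class allows expansion and claim is bound)")
}

------------------------------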
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:42.965: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 36 lines ...
Jun  5 01:03:33.596: INFO: PersistentVolumeClaim pvc-7vbn6 found but phase is Pending instead of Bound.
Jun  5 01:03:35.648: INFO: PersistentVolumeClaim pvc-7vbn6 found and phase=Bound (2.104604368s)
Jun  5 01:03:35.648: INFO: Waiting up to 3m0s for PersistentVolume local-n6md5 to have phase Bound
Jun  5 01:03:35.699: INFO: PersistentVolume local-n6md5 found and phase=Bound (51.424125ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jbzz
STEP: Creating a pod to test subpath
Jun  5 01:03:35.871: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jbzz" in namespace "provisioning-2051" to be "Succeeded or Failed"
Jun  5 01:03:35.924: INFO: Pod "pod-subpath-test-preprovisionedpv-jbzz": Phase="Pending", Reason="", readiness=false. Elapsed: 53.06769ms
Jun  5 01:03:37.976: INFO: Pod "pod-subpath-test-preprovisionedpv-jbzz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10459073s
Jun  5 01:03:40.027: INFO: Pod "pod-subpath-test-preprovisionedpv-jbzz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156069999s
Jun  5 01:03:42.079: INFO: Pod "pod-subpath-test-preprovisionedpv-jbzz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207877459s
STEP: Saw pod success
Jun  5 01:03:42.079: INFO: Pod "pod-subpath-test-preprovisionedpv-jbzz" satisfied condition "Succeeded or Failed"
Jun  5 01:03:42.130: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-jbzz container test-container-subpath-preprovisionedpv-jbzz: <nil>
STEP: delete the pod
Jun  5 01:03:42.261: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jbzz to disappear
Jun  5 01:03:42.313: INFO: Pod pod-subpath-test-preprovisionedpv-jbzz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jbzz
Jun  5 01:03:42.313: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jbzz" in namespace "provisioning-2051"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":18,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:43.125: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 112 lines ...
Jun  5 01:03:18.767: INFO: PersistentVolumeClaim pvc-vpvxk found but phase is Pending instead of Bound.
Jun  5 01:03:20.818: INFO: PersistentVolumeClaim pvc-vpvxk found and phase=Bound (8.26658747s)
Jun  5 01:03:20.818: INFO: Waiting up to 3m0s for PersistentVolume local-tzwd5 to have phase Bound
Jun  5 01:03:20.877: INFO: PersistentVolume local-tzwd5 found and phase=Bound (58.340265ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wg2m
STEP: Creating a pod to test atomic-volume-subpath
Jun  5 01:03:21.033: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wg2m" in namespace "provisioning-3452" to be "Succeeded or Failed"
Jun  5 01:03:21.085: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Pending", Reason="", readiness=false. Elapsed: 51.224729ms
Jun  5 01:03:23.136: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102910918s
Jun  5 01:03:25.188: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15490631s
Jun  5 01:03:27.242: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Running", Reason="", readiness=true. Elapsed: 6.208313873s
Jun  5 01:03:29.294: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Running", Reason="", readiness=true. Elapsed: 8.260167336s
Jun  5 01:03:31.345: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Running", Reason="", readiness=true. Elapsed: 10.311941653s
Jun  5 01:03:33.397: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Running", Reason="", readiness=true. Elapsed: 12.363773889s
Jun  5 01:03:35.449: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Running", Reason="", readiness=true. Elapsed: 14.415244322s
Jun  5 01:03:37.501: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Running", Reason="", readiness=true. Elapsed: 16.467789873s
Jun  5 01:03:39.553: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Running", Reason="", readiness=true. Elapsed: 18.519160272s
Jun  5 01:03:41.604: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Running", Reason="", readiness=true. Elapsed: 20.570528681s
Jun  5 01:03:43.658: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.624373546s
STEP: Saw pod success
Jun  5 01:03:43.658: INFO: Pod "pod-subpath-test-preprovisionedpv-wg2m" satisfied condition "Succeeded or Failed"
Jun  5 01:03:43.724: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-wg2m container test-container-subpath-preprovisionedpv-wg2m: <nil>
STEP: delete the pod
Jun  5 01:03:43.894: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wg2m to disappear
Jun  5 01:03:43.946: INFO: Pod pod-subpath-test-preprovisionedpv-wg2m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wg2m
Jun  5 01:03:43.946: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wg2m" in namespace "provisioning-3452"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":61,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:45.167: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 20 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:03:45.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name secret-emptykey-test-5d9ba07b-58d7-4ad9-9622-4a900513db6e
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:45.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8108" for this suite.
... skipping 25 lines ...
• [SLOW TEST:249.605 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
Jun  5 01:03:40.158: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jun  5 01:03:40.158: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-k49s
STEP: Creating a pod to test subpath
Jun  5 01:03:40.216: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-k49s" in namespace "provisioning-2943" to be "Succeeded or Failed"
Jun  5 01:03:40.267: INFO: Pod "pod-subpath-test-inlinevolume-k49s": Phase="Pending", Reason="", readiness=false. Elapsed: 51.018389ms
Jun  5 01:03:42.319: INFO: Pod "pod-subpath-test-inlinevolume-k49s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102511564s
Jun  5 01:03:44.371: INFO: Pod "pod-subpath-test-inlinevolume-k49s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154258938s
Jun  5 01:03:46.422: INFO: Pod "pod-subpath-test-inlinevolume-k49s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205943496s
Jun  5 01:03:48.506: INFO: Pod "pod-subpath-test-inlinevolume-k49s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.289494294s
STEP: Saw pod success
Jun  5 01:03:48.506: INFO: Pod "pod-subpath-test-inlinevolume-k49s" satisfied condition "Succeeded or Failed"
Jun  5 01:03:48.566: INFO: Trying to get logs from node ip-172-20-63-110.us-west-1.compute.internal pod pod-subpath-test-inlinevolume-k49s container test-container-subpath-inlinevolume-k49s: <nil>
STEP: delete the pod
Jun  5 01:03:48.690: INFO: Waiting for pod pod-subpath-test-inlinevolume-k49s to disappear
Jun  5 01:03:48.741: INFO: Pod pod-subpath-test-inlinevolume-k49s no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-k49s
Jun  5 01:03:48.741: INFO: Deleting pod "pod-subpath-test-inlinevolume-k49s" in namespace "provisioning-2943"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":14,"skipped":100,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:48.965: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:441
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:49.085: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 40 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-7ff30b5d-6db6-47bb-8254-33bbc459daa0
STEP: Creating a pod to test consume configMaps
Jun  5 01:03:42.785: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00" in namespace "configmap-5431" to be "Succeeded or Failed"
Jun  5 01:03:42.836: INFO: Pod "pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00": Phase="Pending", Reason="", readiness=false. Elapsed: 50.32475ms
Jun  5 01:03:44.887: INFO: Pod "pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101936612s
Jun  5 01:03:46.945: INFO: Pod "pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159296101s
Jun  5 01:03:48.995: INFO: Pod "pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209513518s
STEP: Saw pod success
Jun  5 01:03:48.995: INFO: Pod "pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00" satisfied condition "Succeeded or Failed"
Jun  5 01:03:49.045: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00 container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:03:49.156: INFO: Waiting for pod pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00 to disappear
Jun  5 01:03:49.206: INFO: Pod pod-configmaps-cb2772c2-c608-42bc-993a-951bd74dac00 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.888 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":66,"failed":0}

SSS
------------------------------
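A minimal sketch of the pod shape exercised by the ConfigMap test above: a ConfigMap volume with an explicit key-to-path mapping, read by a container running as a non-root user. Names, UID, image and the mapped key are illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	nonRoot := true

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{
				RunAsUser:    &uid,
				RunAsNonRoot: &nonRoot,
			},
			Containers: []v1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "configmap-volume",
				VolumeSource: v1.VolumeSource{
					ConfigMap: &v1.ConfigMapVolumeSource{
						LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-map-example"},
						// Remap key "data-2" to a nested path inside the mount.
						Items: []v1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
		},
	}
	fmt.Printf("pod %q runs as UID %d and reads the mapped ConfigMap key\n", pod.Name, uid)
}

------------------------------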
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:49.332: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 37 lines ...
• [SLOW TEST:36.075 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1055
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":13,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:50.651: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":8,"skipped":70,"failed":0}
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:03:45.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
Jun  5 01:03:10.892: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-1962-aws-sctvxmn
STEP: creating a claim
Jun  5 01:03:10.944: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-jjfs
STEP: Creating a pod to test subpath
Jun  5 01:03:11.102: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-jjfs" in namespace "provisioning-1962" to be "Succeeded or Failed"
Jun  5 01:03:11.154: INFO: Pod "pod-subpath-test-dynamicpv-jjfs": Phase="Pending", Reason="", readiness=false. Elapsed: 51.385916ms
Jun  5 01:03:13.206: INFO: Pod "pod-subpath-test-dynamicpv-jjfs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103893344s
Jun  5 01:03:15.259: INFO: Pod "pod-subpath-test-dynamicpv-jjfs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157138018s
Jun  5 01:03:17.311: INFO: Pod "pod-subpath-test-dynamicpv-jjfs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208581987s
Jun  5 01:03:19.362: INFO: Pod "pod-subpath-test-dynamicpv-jjfs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260112453s
Jun  5 01:03:21.414: INFO: Pod "pod-subpath-test-dynamicpv-jjfs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.311694702s
Jun  5 01:03:23.465: INFO: Pod "pod-subpath-test-dynamicpv-jjfs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.363041775s
Jun  5 01:03:25.524: INFO: Pod "pod-subpath-test-dynamicpv-jjfs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.421336864s
STEP: Saw pod success
Jun  5 01:03:25.524: INFO: Pod "pod-subpath-test-dynamicpv-jjfs" satisfied condition "Succeeded or Failed"
Jun  5 01:03:25.578: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-jjfs container test-container-subpath-dynamicpv-jjfs: <nil>
STEP: delete the pod
Jun  5 01:03:25.712: INFO: Waiting for pod pod-subpath-test-dynamicpv-jjfs to disappear
Jun  5 01:03:25.763: INFO: Pod pod-subpath-test-dynamicpv-jjfs no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-jjfs
Jun  5 01:03:25.763: INFO: Deleting pod "pod-subpath-test-dynamicpv-jjfs" in namespace "provisioning-1962"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":107,"failed":0}

SS
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":9,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:03:51.472: INFO: >>> kubeConfig: /root/.kube/config
... skipping 53 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:48
STEP: Creating a pod to test hostPath mode
Jun  5 01:03:49.285: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8147" to be "Succeeded or Failed"
Jun  5 01:03:49.336: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 51.342238ms
Jun  5 01:03:51.422: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137159693s
Jun  5 01:03:53.474: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189003645s
STEP: Saw pod success
Jun  5 01:03:53.474: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jun  5 01:03:53.525: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jun  5 01:03:53.638: INFO: Waiting for pod pod-host-path-test to disappear
Jun  5 01:03:53.697: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 27 lines ...
• [SLOW TEST:255.638 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:285
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:53.869: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-1025f2d9-4944-4a27-bfa3-7c034607924f
STEP: Creating a pod to test consume secrets
Jun  5 01:03:51.289: INFO: Waiting up to 5m0s for pod "pod-secrets-8529ccfc-e057-480c-9dac-6a50c3358892" in namespace "secrets-2912" to be "Succeeded or Failed"
Jun  5 01:03:51.371: INFO: Pod "pod-secrets-8529ccfc-e057-480c-9dac-6a50c3358892": Phase="Pending", Reason="", readiness=false. Elapsed: 82.12675ms
Jun  5 01:03:53.422: INFO: Pod "pod-secrets-8529ccfc-e057-480c-9dac-6a50c3358892": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132978879s
Jun  5 01:03:55.472: INFO: Pod "pod-secrets-8529ccfc-e057-480c-9dac-6a50c3358892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183359577s
STEP: Saw pod success
Jun  5 01:03:55.473: INFO: Pod "pod-secrets-8529ccfc-e057-480c-9dac-6a50c3358892" satisfied condition "Succeeded or Failed"
Jun  5 01:03:55.532: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-secrets-8529ccfc-e057-480c-9dac-6a50c3358892 container secret-volume-test: <nil>
STEP: delete the pod
Jun  5 01:03:55.642: INFO: Waiting for pod pod-secrets-8529ccfc-e057-480c-9dac-6a50c3358892 to disappear
Jun  5 01:03:55.695: INFO: Pod pod-secrets-8529ccfc-e057-480c-9dac-6a50c3358892 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 5 lines ...
• [SLOW TEST:5.184 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":103,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 16 lines ...
Jun  5 01:03:10.612: INFO: Waiting up to 5m0s for PersistentVolumeClaims [nfsntlm8] to have phase Bound
Jun  5 01:03:10.664: INFO: PersistentVolumeClaim nfsntlm8 found but phase is Pending instead of Bound.
Jun  5 01:03:12.718: INFO: PersistentVolumeClaim nfsntlm8 found but phase is Pending instead of Bound.
Jun  5 01:03:14.770: INFO: PersistentVolumeClaim nfsntlm8 found and phase=Bound (4.158819219s)
STEP: Creating pod pod-subpath-test-dynamicpv-w5sw
STEP: Creating a pod to test atomic-volume-subpath
Jun  5 01:03:14.936: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-w5sw" in namespace "provisioning-1185" to be "Succeeded or Failed"
Jun  5 01:03:15.002: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Pending", Reason="", readiness=false. Elapsed: 66.390146ms
Jun  5 01:03:17.054: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118362167s
Jun  5 01:03:19.112: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175895552s
Jun  5 01:03:21.165: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Running", Reason="", readiness=true. Elapsed: 6.228561408s
Jun  5 01:03:23.217: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Running", Reason="", readiness=true. Elapsed: 8.280842082s
Jun  5 01:03:25.269: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Running", Reason="", readiness=true. Elapsed: 10.333177026s
... skipping 3 lines ...
Jun  5 01:03:33.484: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Running", Reason="", readiness=true. Elapsed: 18.548009471s
Jun  5 01:03:35.537: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Running", Reason="", readiness=true. Elapsed: 20.6010489s
Jun  5 01:03:37.589: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Running", Reason="", readiness=true. Elapsed: 22.653458565s
Jun  5 01:03:39.642: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Running", Reason="", readiness=true. Elapsed: 24.706192218s
Jun  5 01:03:41.695: INFO: Pod "pod-subpath-test-dynamicpv-w5sw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.758778149s
STEP: Saw pod success
Jun  5 01:03:41.695: INFO: Pod "pod-subpath-test-dynamicpv-w5sw" satisfied condition "Succeeded or Failed"
Jun  5 01:03:41.748: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-subpath-test-dynamicpv-w5sw container test-container-subpath-dynamicpv-w5sw: <nil>
STEP: delete the pod
Jun  5 01:03:41.862: INFO: Waiting for pod pod-subpath-test-dynamicpv-w5sw to disappear
Jun  5 01:03:41.914: INFO: Pod pod-subpath-test-dynamicpv-w5sw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-w5sw
Jun  5 01:03:41.915: INFO: Deleting pod "pod-subpath-test-dynamicpv-w5sw" in namespace "provisioning-1185"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":10,"skipped":73,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:56.547: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Jun  5 01:03:48.395: INFO: PersistentVolumeClaim pvc-k6h6d found but phase is Pending instead of Bound.
Jun  5 01:03:50.446: INFO: PersistentVolumeClaim pvc-k6h6d found and phase=Bound (8.254940943s)
Jun  5 01:03:50.446: INFO: Waiting up to 3m0s for PersistentVolume local-jzxf9 to have phase Bound
Jun  5 01:03:50.517: INFO: PersistentVolume local-jzxf9 found and phase=Bound (71.183038ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-tqfw
STEP: Creating a pod to test exec-volume-test
Jun  5 01:03:50.690: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-tqfw" in namespace "volume-5903" to be "Succeeded or Failed"
Jun  5 01:03:50.744: INFO: Pod "exec-volume-test-preprovisionedpv-tqfw": Phase="Pending", Reason="", readiness=false. Elapsed: 53.929455ms
Jun  5 01:03:52.795: INFO: Pod "exec-volume-test-preprovisionedpv-tqfw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105091608s
Jun  5 01:03:54.846: INFO: Pod "exec-volume-test-preprovisionedpv-tqfw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156392914s
STEP: Saw pod success
Jun  5 01:03:54.846: INFO: Pod "exec-volume-test-preprovisionedpv-tqfw" satisfied condition "Succeeded or Failed"
Jun  5 01:03:54.897: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod exec-volume-test-preprovisionedpv-tqfw container exec-container-preprovisionedpv-tqfw: <nil>
STEP: delete the pod
Jun  5 01:03:55.007: INFO: Waiting for pod exec-volume-test-preprovisionedpv-tqfw to disappear
Jun  5 01:03:55.057: INFO: Pod exec-volume-test-preprovisionedpv-tqfw no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-tqfw
Jun  5 01:03:55.057: INFO: Deleting pod "exec-volume-test-preprovisionedpv-tqfw" in namespace "volume-5903"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "services-2547" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":11,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:57.058: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 100 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:173
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":6,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:02:35.756: INFO: >>> kubeConfig: /root/.kube/config
... skipping 67 lines ...
Jun  5 01:03:45.781: INFO: Waiting for pod aws-client to disappear
Jun  5 01:03:45.833: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jun  5 01:03:45.833: INFO: Deleting PersistentVolumeClaim "pvc-ffjsl"
Jun  5 01:03:45.886: INFO: Deleting PersistentVolume "aws-h9tt5"
Jun  5 01:03:46.301: INFO: Couldn't delete PD "aws://us-west-1a/vol-0ed9928dc4a8a6c40", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ed9928dc4a8a6c40 is currently attached to i-0001a4645880ec32d
	status code: 400, request id: 6ab4c8d2-fa9c-44c5-8aba-9b06188f580e
Jun  5 01:03:51.646: INFO: Couldn't delete PD "aws://us-west-1a/vol-0ed9928dc4a8a6c40", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ed9928dc4a8a6c40 is currently attached to i-0001a4645880ec32d
	status code: 400, request id: 1ec9ba6c-65b0-4033-92d1-7f396030936f
Jun  5 01:03:57.005: INFO: Successfully deleted PD "aws://us-west-1a/vol-0ed9928dc4a8a6c40".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:03:57.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4505" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":7,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:57.126: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 22 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:03:56.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jun  5 01:03:57.227: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:us-west-1a]
Jun  5 01:03:57.227: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jun  5 01:03:57.227: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 110 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1304
------------------------------
... skipping 120 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":39,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:58.082: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 112 lines ...
Jun  5 01:03:58.902: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.378 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 46 lines ...
Jun  5 01:03:33.460: INFO: PersistentVolumeClaim pvc-gvqs9 found but phase is Pending instead of Bound.
Jun  5 01:03:35.511: INFO: PersistentVolumeClaim pvc-gvqs9 found and phase=Bound (8.25462966s)
Jun  5 01:03:35.511: INFO: Waiting up to 3m0s for PersistentVolume local-sr85d to have phase Bound
Jun  5 01:03:35.562: INFO: PersistentVolume local-sr85d found and phase=Bound (51.180856ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zqvl
STEP: Creating a pod to test atomic-volume-subpath
Jun  5 01:03:35.722: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zqvl" in namespace "provisioning-629" to be "Succeeded or Failed"
Jun  5 01:03:35.775: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Pending", Reason="", readiness=false. Elapsed: 53.522599ms
Jun  5 01:03:37.829: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10762025s
Jun  5 01:03:39.881: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 4.158907377s
Jun  5 01:03:41.932: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 6.209953989s
Jun  5 01:03:43.983: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 8.261234131s
Jun  5 01:03:46.036: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 10.313898809s
Jun  5 01:03:48.087: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 12.365235661s
Jun  5 01:03:50.143: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 14.421722s
Jun  5 01:03:52.194: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 16.472648326s
Jun  5 01:03:54.252: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 18.529893256s
Jun  5 01:03:56.307: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Running", Reason="", readiness=true. Elapsed: 20.584968687s
Jun  5 01:03:58.359: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.637209692s
STEP: Saw pod success
Jun  5 01:03:58.359: INFO: Pod "pod-subpath-test-preprovisionedpv-zqvl" satisfied condition "Succeeded or Failed"
Jun  5 01:03:58.409: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod pod-subpath-test-preprovisionedpv-zqvl container test-container-subpath-preprovisionedpv-zqvl: <nil>
STEP: delete the pod
Jun  5 01:03:58.526: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zqvl to disappear
Jun  5 01:03:58.577: INFO: Pod pod-subpath-test-preprovisionedpv-zqvl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zqvl
Jun  5 01:03:58.577: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zqvl" in namespace "provisioning-629"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:03:59.369: INFO: Only supported for providers [vsphere] (not aws)
... skipping 173 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-map-0530f4b1-1180-4211-bd38-0c0f2bdc05cd
STEP: Creating a pod to test consume configMaps
Jun  5 01:03:58.260: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39663a0b-4179-4e0e-801e-d525ab32ebb4" in namespace "projected-6787" to be "Succeeded or Failed"
Jun  5 01:03:58.310: INFO: Pod "pod-projected-configmaps-39663a0b-4179-4e0e-801e-d525ab32ebb4": Phase="Pending", Reason="", readiness=false. Elapsed: 50.574962ms
Jun  5 01:04:00.361: INFO: Pod "pod-projected-configmaps-39663a0b-4179-4e0e-801e-d525ab32ebb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101615229s
Jun  5 01:04:02.413: INFO: Pod "pod-projected-configmaps-39663a0b-4179-4e0e-801e-d525ab32ebb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152990458s
STEP: Saw pod success
Jun  5 01:04:02.413: INFO: Pod "pod-projected-configmaps-39663a0b-4179-4e0e-801e-d525ab32ebb4" satisfied condition "Succeeded or Failed"
Jun  5 01:04:02.464: INFO: Trying to get logs from node ip-172-20-56-177.us-west-1.compute.internal pod pod-projected-configmaps-39663a0b-4179-4e0e-801e-d525ab32ebb4 container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:04:02.588: INFO: Waiting for pod pod-projected-configmaps-39663a0b-4179-4e0e-801e-d525ab32ebb4 to disappear
Jun  5 01:04:02.644: INFO: Pod pod-projected-configmaps-39663a0b-4179-4e0e-801e-d525ab32ebb4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jun  5 01:04:02.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6787" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":84,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jun  5 01:03:53.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-7cdfd153-5ac6-4330-ac41-eeb4dd6868ba
STEP: Creating a pod to test consume secrets
Jun  5 01:03:54.256: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962" in namespace "projected-8523" to be "Succeeded or Failed"
Jun  5 01:03:54.310: INFO: Pod "pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962": Phase="Pending", Reason="", readiness=false. Elapsed: 54.517484ms
Jun  5 01:03:56.362: INFO: Pod "pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106172648s
Jun  5 01:03:58.413: INFO: Pod "pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157712765s
Jun  5 01:04:00.465: INFO: Pod "pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209395932s
Jun  5 01:04:02.525: INFO: Pod "pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.269201419s
STEP: Saw pod success
Jun  5 01:04:02.525: INFO: Pod "pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962" satisfied condition "Succeeded or Failed"
Jun  5 01:04:02.576: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun  5 01:04:02.692: INFO: Waiting for pod pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962 to disappear
Jun  5 01:04:02.743: INFO: Pod pod-projected-secrets-3bf3d824-7b5a-4980-b300-d06726985962 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.959 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:04:02.867: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 212 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:04:04.747: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 131 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":12,"skipped":86,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 54 lines ...
Jun  5 01:03:41.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Jun  5 01:03:42.301: INFO: Waiting up to 5m0s for pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae" in namespace "svcaccounts-662" to be "Succeeded or Failed"
Jun  5 01:03:42.352: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 51.132543ms
Jun  5 01:03:44.403: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102507269s
Jun  5 01:03:46.455: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154090251s
Jun  5 01:03:48.511: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209949395s
STEP: Saw pod success
Jun  5 01:03:48.511: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae" satisfied condition "Succeeded or Failed"
Jun  5 01:03:48.566: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:03:48.690: INFO: Waiting for pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae to disappear
Jun  5 01:03:48.741: INFO: Pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae no longer exists
STEP: Creating a pod to test service account token: 
Jun  5 01:03:48.794: INFO: Waiting up to 5m0s for pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae" in namespace "svcaccounts-662" to be "Succeeded or Failed"
Jun  5 01:03:48.845: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 50.890349ms
Jun  5 01:03:50.897: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102662455s
Jun  5 01:03:52.948: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154334551s
Jun  5 01:03:55.000: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.206222068s
STEP: Saw pod success
Jun  5 01:03:55.000: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae" satisfied condition "Succeeded or Failed"
Jun  5 01:03:55.051: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:03:55.196: INFO: Waiting for pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae to disappear
Jun  5 01:03:55.247: INFO: Pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae no longer exists
STEP: Creating a pod to test service account token: 
Jun  5 01:03:55.300: INFO: Waiting up to 5m0s for pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae" in namespace "svcaccounts-662" to be "Succeeded or Failed"
Jun  5 01:03:55.351: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 51.10375ms
Jun  5 01:03:57.402: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102551271s
Jun  5 01:03:59.458: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15810092s
Jun  5 01:04:01.510: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209704565s
Jun  5 01:04:03.562: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.261857211s
STEP: Saw pod success
Jun  5 01:04:03.562: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae" satisfied condition "Succeeded or Failed"
Jun  5 01:04:03.613: INFO: Trying to get logs from node ip-172-20-35-190.us-west-1.compute.internal pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:04:03.724: INFO: Waiting for pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae to disappear
Jun  5 01:04:03.776: INFO: Pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae no longer exists
STEP: Creating a pod to test service account token: 
Jun  5 01:04:03.828: INFO: Waiting up to 5m0s for pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae" in namespace "svcaccounts-662" to be "Succeeded or Failed"
Jun  5 01:04:03.879: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Pending", Reason="", readiness=false. Elapsed: 50.80414ms
Jun  5 01:04:05.930: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.10222371s
STEP: Saw pod success
Jun  5 01:04:05.930: INFO: Pod "test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae" satisfied condition "Succeeded or Failed"
Jun  5 01:04:05.982: INFO: Trying to get logs from node ip-172-20-52-198.us-west-1.compute.internal pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae container agnhost-container: <nil>
STEP: delete the pod
Jun  5 01:04:06.104: INFO: Waiting for pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae to disappear
Jun  5 01:04:06.156: INFO: Pod test-pod-0efeb203-86a1-4945-9efc-95f5a2b53aae no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:24.272 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":123,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:130
Jun  5 01:04:06.278: INFO: Only supported for providers [vsphere] (not aws)
... skipping 32545 lines ...
e\nI0605 01:08:00.555032       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-2541/busybox-13be5c5b-6654-48d7-95f9-cac6d48e6474\" objectUID=be5d4de3-cae2-4301-936c-08a1e9c0ec2b kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0605 01:08:00.583951       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.5.237).\nI0605 01:08:00.584695       1 event.go:291] \"Event occurred\" object=\"volume-expand-5848-5503/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0605 01:08:00.634508       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.186.113).\nI0605 01:08:00.693460       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.186.113).\nI0605 01:08:00.694210       1 event.go:291] \"Event occurred\" object=\"volume-expand-5848-5503/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0605 01:08:00.857766       1 event.go:291] \"Event occurred\" object=\"volume-expand-5848/csi-hostpathn7cs4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5848\\\" or manually created by system administrator\"\nI0605 01:08:00.975321       1 namespace_controller.go:185] Namespace has been deleted volume-expand-6982\nI0605 01:08:01.124327       1 namespace_controller.go:185] Namespace has been deleted projected-4913\nI0605 01:08:01.131847       1 namespace_controller.go:185] Namespace has been deleted resourcequota-2170\nI0605 01:08:01.148262       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.244.35).\nI0605 01:08:01.308300       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.214.159).\nE0605 01:08:01.371964       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:01.529241       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.5.237).\nE0605 01:08:01.548766       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0605 01:08:01.781196       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-9780/default: secrets \"default-token-zfl2j\" is forbidden: unable to create new content in namespace emptydir-9780 because it is being terminated\nI0605 01:08:01.968113       1 namespace_controller.go:185] Namespace has been deleted watch-5180\nI0605 01:08:02.171634       1 aws.go:2037] Releasing in-process attachment entry: cu -> volume vol-054c9800bc3642524\nI0605 01:08:02.171688       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") from node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:02.171709       1 actual_state_of_world.go:350] Volume \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\" is already added to attachedVolume list to node \"ip-172-20-52-198.us-west-1.compute.internal\", update device path \"/dev/xvdcu\"\nI0605 01:08:02.172027       1 event.go:291] \"Event occurred\" object=\"volume-5319/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-volume-0\\\" \"\nI0605 01:08:02.389194       1 namespace_controller.go:185] Namespace has been deleted gc-934\nE0605 01:08:02.483339       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-5365/default: secrets \"default-token-f7tsj\" is forbidden: unable to create new content in namespace kubectl-5365 because it is being terminated\nI0605 01:08:02.569616       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-9222/condition-test\" need=3 creating=3\nI0605 01:08:02.573139       1 namespace_controller.go:185] Namespace has been deleted provisioning-5973\nI0605 01:08:02.577771       1 event.go:291] \"Event occurred\" object=\"replication-controller-9222/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-ddwkp\"\nI0605 01:08:02.586211       1 event.go:291] \"Event occurred\" object=\"replication-controller-9222/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-n4lbd\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0605 01:08:02.589652       1 replica_set.go:584] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-9222/condition-test\nI0605 01:08:02.589774       1 event.go:291] \"Event occurred\" object=\"replication-controller-9222/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-gld8l\"\nE0605 01:08:02.593922       1 replica_set.go:532] sync \"replication-controller-9222/condition-test\" failed with pods \"condition-test-n4lbd\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0605 01:08:02.594024       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-9222/condition-test\" need=3 creating=1\nI0605 01:08:02.597836       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-9222/condition-test\nI0605 01:08:02.598168       1 event.go:291] \"Event occurred\" object=\"replication-controller-9222/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-sxvjb\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0605 01:08:02.601240       1 replica_set.go:532] sync \"replication-controller-9222/condition-test\" failed with pods \"condition-test-sxvjb\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0605 01:08:02.601304       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-9222/condition-test\" need=3 creating=1\nI0605 01:08:02.602655       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-9222/condition-test\nI0605 01:08:02.602902       1 event.go:291] \"Event occurred\" object=\"replication-controller-9222/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-bxpp5\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0605 01:08:02.615051       1 replica_set.go:532] sync \"replication-controller-9222/condition-test\" failed with pods \"condition-test-bxpp5\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0605 01:08:02.615114       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-9222/condition-test\" need=3 creating=1\nI0605 01:08:02.616165       1 replica_set.go:584] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-9222/condition-test\nE0605 01:08:02.616203       1 replica_set.go:532] sync \"replication-controller-9222/condition-test\" failed with pods \"condition-test-jwjq9\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0605 01:08:02.616377       1 event.go:291] \"Event occurred\" object=\"replication-controller-9222/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-jwjq9\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0605 01:08:02.635224       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-9222/condition-test\" need=3 creating=1\nI0605 01:08:02.636358       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-9222/condition-test\nE0605 01:08:02.636393       1 replica_set.go:532] sync \"replication-controller-9222/condition-test\" failed with pods \"condition-test-stt4b\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0605 01:08:02.636428       1 event.go:291] \"Event occurred\" object=\"replication-controller-9222/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-stt4b\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0605 01:08:02.716594       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"replication-controller-9222/condition-test\" need=3 creating=1\nI0605 01:08:02.718245       1 replica_set.go:584] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-9222/condition-test\nE0605 01:08:02.718289       1 replica_set.go:532] sync \"replication-controller-9222/condition-test\" failed with pods \"condition-test-pzcmw\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0605 01:08:02.718488       1 event.go:291] \"Event occurred\" object=\"replication-controller-9222/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-pzcmw\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0605 01:08:02.737540       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8411/pvc-nrgbh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8411\\\" or manually created by system administrator\"\nI0605 01:08:02.765292       1 pv_controller.go:864] volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" entered phase \"Bound\"\nI0605 01:08:02.765323       1 pv_controller.go:967] volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" bound to claim \"csi-mock-volumes-8411/pvc-nrgbh\"\nI0605 01:08:02.790134       1 pv_controller.go:808] claim \"csi-mock-volumes-8411/pvc-nrgbh\" entered phase \"Bound\"\nI0605 01:08:02.844820       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-attacher. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.94.17).\nI0605 01:08:02.912678       1 event.go:291] \"Event occurred\" object=\"provisioning-8636-5609/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0605 01:08:02.912922       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.94.17).\nE0605 01:08:03.001104       1 namespace_controller.go:162] deletion of namespace configmap-6245 failed: unexpected items still remain in namespace: configmap-6245 for gvr: /v1, Resource=pods\nI0605 01:08:03.022232       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.41.16).\nI0605 01:08:03.061468       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8411^4\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:03.092545       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.41.16).\nI0605 01:08:03.093555       1 event.go:291] \"Event occurred\" object=\"provisioning-8636-5609/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0605 01:08:03.102097       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8411^4\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:03.102349       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8411/pvc-volume-tester-4cnrj\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\\\" \"\nI0605 01:08:03.138884       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.163.128).\nI0605 01:08:03.211042       1 event.go:291] \"Event occurred\" object=\"provisioning-8636-5609/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0605 01:08:03.211286       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.163.128).\nI0605 01:08:03.276239       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.34.20).\nI0605 01:08:03.279484       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-8199\nI0605 01:08:03.347679       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.34.20).\nI0605 01:08:03.348640       1 event.go:291] \"Event occurred\" object=\"provisioning-8636-5609/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nE0605 01:08:03.366907       1 namespace_controller.go:162] deletion of namespace configmap-6245 failed: unexpected items still remain in namespace: configmap-6245 for gvr: /v1, Resource=pods\nI0605 01:08:03.409545       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.39.51).\nI0605 01:08:03.441793       1 garbagecollector.go:471] \"Processing object\" object=\"nettest-2428/netserver-0\" objectUID=89ee6097-723f-4e9f-912d-899d0fa80d56 kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:03.448833       1 garbagecollector.go:580] \"Deleting object\" object=\"nettest-2428/netserver-0\" objectUID=89ee6097-723f-4e9f-912d-899d0fa80d56 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0605 01:08:03.453265       1 garbagecollector.go:471] \"Processing object\" object=\"nettest-2428/netserver-1\" objectUID=b396f579-3674-438d-bb2c-3aa3c202e316 kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:03.464165       1 garbagecollector.go:580] \"Deleting object\" object=\"nettest-2428/netserver-1\" objectUID=b396f579-3674-438d-bb2c-3aa3c202e316 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0605 01:08:03.467042       1 garbagecollector.go:471] \"Processing object\" object=\"nettest-2428/netserver-2\" objectUID=40946aa4-3a2c-49ff-ad67-21661f9c5095 kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:03.483303       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.39.51).\nI0605 01:08:03.483636       1 garbagecollector.go:580] \"Deleting object\" object=\"nettest-2428/netserver-2\" objectUID=40946aa4-3a2c-49ff-ad67-21661f9c5095 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0605 01:08:03.483926       1 event.go:291] \"Event occurred\" object=\"provisioning-8636-5609/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0605 01:08:03.494779       1 garbagecollector.go:471] \"Processing object\" object=\"nettest-2428/netserver-3\" objectUID=695434a9-d374-4812-aebe-55d4a87e23e1 kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:03.506179       1 garbagecollector.go:580] \"Deleting object\" object=\"nettest-2428/netserver-3\" objectUID=695434a9-d374-4812-aebe-55d4a87e23e1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0605 01:08:03.658904       1 event.go:291] \"Event occurred\" object=\"provisioning-8636/csi-hostpathhnl6b\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-8636\\\" or manually created by system administrator\"\nE0605 01:08:03.679852       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-2428/default: secrets \"default-token-g6hl5\" is forbidden: unable to create new content in namespace nettest-2428 because it is being terminated\nE0605 01:08:03.707016       1 namespace_controller.go:162] deletion of namespace configmap-6245 failed: unexpected items still remain in namespace: configmap-6245 for gvr: /v1, Resource=pods\nE0605 01:08:03.854695       1 namespace_controller.go:162] deletion of namespace configmap-6245 failed: unexpected items still remain in namespace: configmap-6245 for gvr: /v1, Resource=pods\nI0605 01:08:03.885764       1 pv_controller.go:915] claim \"provisioning-8236/pvc-wjr8t\" bound to volume \"local-pzfjc\"\nI0605 
01:08:03.886111       1 event.go:291] \"Event occurred\" object=\"volume-expand-5848/csi-hostpathn7cs4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5848\\\" or manually created by system administrator\"\nI0605 01:08:03.886153       1 event.go:291] \"Event occurred\" object=\"provisioning-8636/csi-hostpathhnl6b\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-8636\\\" or manually created by system administrator\"\nI0605 01:08:03.898050       1 pv_controller.go:864] volume \"local-pzfjc\" entered phase \"Bound\"\nI0605 01:08:03.898076       1 pv_controller.go:967] volume \"local-pzfjc\" bound to claim \"provisioning-8236/pvc-wjr8t\"\nI0605 01:08:03.910190       1 pv_controller.go:808] claim \"provisioning-8236/pvc-wjr8t\" entered phase \"Bound\"\nE0605 01:08:04.013194       1 namespace_controller.go:162] deletion of namespace configmap-6245 failed: unexpected items still remain in namespace: configmap-6245 for gvr: /v1, Resource=pods\nI0605 01:08:04.030092       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.41.16).\nI0605 01:08:04.111376       1 namespace_controller.go:185] Namespace has been deleted emptydir-8267\nI0605 01:08:04.145963       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.163.128).\nI0605 01:08:04.156118       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.244.35).\nI0605 01:08:04.209291       1 pv_controller.go:864] volume \"pvc-b6b7356a-75e3-425f-8c51-fcb5bc8b9a5e\" entered phase \"Bound\"\nI0605 01:08:04.209324       1 pv_controller.go:967] volume \"pvc-b6b7356a-75e3-425f-8c51-fcb5bc8b9a5e\" bound to claim \"volume-expand-5848/csi-hostpathn7cs4\"\nI0605 01:08:04.219878       1 pv_controller.go:808] claim \"volume-expand-5848/csi-hostpathn7cs4\" entered phase \"Bound\"\nE0605 01:08:04.225077       1 namespace_controller.go:162] deletion of namespace configmap-6245 failed: unexpected items still remain in namespace: configmap-6245 for gvr: /v1, Resource=pods\nI0605 01:08:04.289689       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.34.20).\nI0605 01:08:04.413660       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-snapshotter. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.39.51).\nE0605 01:08:04.476760       1 namespace_controller.go:162] deletion of namespace configmap-6245 failed: unexpected items still remain in namespace: configmap-6245 for gvr: /v1, Resource=pods\nI0605 01:08:04.625791       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.41.32).\nW0605 01:08:04.625924       1 utils.go:323] Service services-9896/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0605 01:08:04.757336       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.186.113).\nE0605 01:08:04.957113       1 namespace_controller.go:162] deletion of namespace configmap-6245 failed: unexpected items still remain in namespace: configmap-6245 for gvr: /v1, Resource=pods\nI0605 01:08:05.126699       1 namespace_controller.go:185] Namespace has been deleted emptydir-8921\nI0605 01:08:05.165756       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.244.35).\nI0605 01:08:05.560307       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.135.54).\nI0605 01:08:05.770228       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.186.113).\nI0605 01:08:05.944605       1 namespace_controller.go:185] Namespace has been deleted volume-1309\nE0605 01:08:05.989913       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-3605/default: secrets \"default-token-vmzth\" is forbidden: unable to create new content in namespace emptydir-3605 because it is being terminated\nI0605 01:08:06.227348       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0605 01:08:06.338426       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0605 01:08:06.398412       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-f360dfd3-6e5c-4559-afa6-94d1b46a9f83\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-7635^6754ca83-c59a-11eb-9b72-e688462d0f63\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:06.400605       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-f360dfd3-6e5c-4559-afa6-94d1b46a9f83\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-7635^6754ca83-c59a-11eb-9b72-e688462d0f63\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:06.403082       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-f360dfd3-6e5c-4559-afa6-94d1b46a9f83\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-7635^6754ca83-c59a-11eb-9b72-e688462d0f63\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:06.760540       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.5.237).\nI0605 01:08:06.811609       1 namespace_controller.go:185] Namespace has been deleted emptydir-9780\nI0605 01:08:07.044942       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:07.099427       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-9896/service-headless-toggled\" need=3 creating=3\nI0605 01:08:07.103807       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:07.104762       1 event.go:291] \"Event occurred\" object=\"services-9896/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-w4vsx\"\nI0605 01:08:07.111408       1 event.go:291] \"Event occurred\" object=\"services-9896/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-rhrmv\"\nI0605 01:08:07.115609       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:07.118627       1 event.go:291] \"Event occurred\" object=\"services-9896/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-pq7xq\"\nI0605 01:08:07.122039       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:07.533074       1 namespace_controller.go:185] Namespace has been deleted kubectl-5365\nI0605 01:08:07.559110       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.214.159).\nI0605 01:08:07.747577       1 glusterfs.go:734] allocated GID 2000 for PVC pvc-kmhfg\nI0605 01:08:07.758750       1 glusterfs.go:793] create volume of size 2GiB\nE0605 01:08:07.944158       1 tokens_controller.go:262] error synchronizing serviceaccount replication-controller-9222/default: secrets \"default-token-zbmbp\" is forbidden: unable to create new content in namespace replication-controller-9222 because it is being terminated\nI0605 01:08:07.946574       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-9222/condition-test-ddwkp\" objectUID=6450d442-d195-48b4-a7bf-8850e34b2b00 kind=\"Pod\" virtual=false\nI0605 01:08:07.946858       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-9222/condition-test-gld8l\" objectUID=689ae178-ee4e-4944-bb4e-0344132eedae kind=\"Pod\" virtual=false\nI0605 01:08:07.950307       1 garbagecollector.go:580] \"Deleting object\" object=\"replication-controller-9222/condition-test-gld8l\" objectUID=689ae178-ee4e-4944-bb4e-0344132eedae kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:07.950931       1 garbagecollector.go:580] \"Deleting object\" object=\"replication-controller-9222/condition-test-ddwkp\" objectUID=6450d442-d195-48b4-a7bf-8850e34b2b00 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:07.997890       1 pv_controller.go:864] volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" entered phase \"Bound\"\nI0605 01:08:07.997924       1 pv_controller.go:967] volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" bound to claim \"provisioning-8636/csi-hostpathhnl6b\"\nI0605 01:08:08.005455       1 pv_controller.go:808] claim \"provisioning-8636/csi-hostpathhnl6b\" entered phase \"Bound\"\nI0605 01:08:08.048597       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:08.056951       1 namespace_controller.go:185] Namespace has been deleted deployment-3276\nI0605 01:08:08.063902       1 resource_quota_controller.go:307] Resource quota has been deleted replication-controller-9222/condition-test\nI0605 01:08:08.145842       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.94.17).\nI0605 01:08:08.178115       1 glusterfs.go:824] volume with size 2 and name vol_60c63768a93d84835be8473bb493fb58 created\nI0605 01:08:08.193959       1 pv_controller.go:1652] volume \"pvc-289f2918-f2d1-46fd-8b34-366aedc8cea8\" provisioned for claim \"volume-provisioning-2552/pvc-kmhfg\"\nI0605 01:08:08.194200       1 event.go:291] \"Event occurred\" object=\"volume-provisioning-2552/pvc-kmhfg\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-289f2918-f2d1-46fd-8b34-366aedc8cea8 using kubernetes.io/glusterfs\"\nI0605 01:08:08.199055       1 pv_controller.go:864] volume \"pvc-289f2918-f2d1-46fd-8b34-366aedc8cea8\" entered phase \"Bound\"\nI0605 01:08:08.199086       1 pv_controller.go:967] volume \"pvc-289f2918-f2d1-46fd-8b34-366aedc8cea8\" bound to claim \"volume-provisioning-2552/pvc-kmhfg\"\nI0605 01:08:08.206926       1 pv_controller.go:808] claim \"volume-provisioning-2552/pvc-kmhfg\" entered phase \"Bound\"\nI0605 01:08:08.263939       1 pvc_protection_controller.go:291] PVC volume-expand-7635/csi-hostpaths568k is unused\nI0605 01:08:08.270915       1 pv_controller.go:638] volume \"pvc-f360dfd3-6e5c-4559-afa6-94d1b46a9f83\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:08.274266       1 pv_controller.go:864] volume \"pvc-f360dfd3-6e5c-4559-afa6-94d1b46a9f83\" entered phase \"Released\"\nI0605 01:08:08.276431       1 pv_controller.go:1326] isVolumeReleased[pvc-f360dfd3-6e5c-4559-afa6-94d1b46a9f83]: volume is released\nI0605 01:08:08.298886       1 pv_controller_base.go:504] deletion of claim \"volume-expand-7635/csi-hostpaths568k\" was already processed\nI0605 01:08:08.565896       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-5848-5503/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.214.159).\nI0605 01:08:08.648738       1 namespace_controller.go:185] Namespace has been deleted sysctl-8125\nI0605 01:08:08.727107       1 namespace_controller.go:185] Namespace has been deleted nettest-2428\nE0605 01:08:08.781499       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-3282/pvc-762w9: storageclass.storage.k8s.io \"provisioning-3282\" not found\nI0605 01:08:08.781781       1 event.go:291] \"Event occurred\" object=\"provisioning-3282/pvc-762w9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-3282\\\" not found\"\nI0605 01:08:08.892267       1 pv_controller.go:864] volume \"local-pwtnt\" entered phase \"Available\"\nI0605 01:08:09.154366       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.94.17).\nE0605 01:08:09.539700       1 pv_controller.go:1437] error finding provisioning plugin for claim volumemode-5078/pvc-wtjst: storageclass.storage.k8s.io \"volumemode-5078\" not found\nI0605 01:08:09.539948       1 event.go:291] \"Event occurred\" object=\"volumemode-5078/pvc-wtjst\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-5078\\\" not found\"\nI0605 01:08:09.597483       1 pv_controller.go:864] volume \"local-87ss4\" entered phase \"Available\"\nI0605 01:08:09.736645       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.34.20).\nI0605 01:08:10.006480       1 pvc_protection_controller.go:291] PVC volume-provisioning-2552/pvc-kmhfg is unused\nI0605 01:08:10.012536       1 pv_controller.go:638] volume \"pvc-289f2918-f2d1-46fd-8b34-366aedc8cea8\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:10.015261       1 pv_controller.go:864] volume \"pvc-289f2918-f2d1-46fd-8b34-366aedc8cea8\" entered phase \"Released\"\nI0605 01:08:10.018026       1 pv_controller.go:1326] isVolumeReleased[pvc-289f2918-f2d1-46fd-8b34-366aedc8cea8]: volume is released\nI0605 01:08:10.018046       1 glusterfs.go:636] delete volume vol_60c63768a93d84835be8473bb493fb58\nI0605 01:08:10.118104       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8636^7d20a48d-c59a-11eb-80fa-5ed3c4c0dd76\") from node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:10.128920       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8636^7d20a48d-c59a-11eb-80fa-5ed3c4c0dd76\") from node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:10.129068       1 event.go:291] \"Event occurred\" object=\"provisioning-8636/pod-subpath-test-dynamicpv-88rr\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\\\" \"\nI0605 01:08:10.135294       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.163.128).\nI0605 01:08:10.425896       1 glusterfs.go:679] volume vol_60c63768a93d84835be8473bb493fb58 deleted successfully\nI0605 01:08:10.437780       1 garbagecollector.go:471] \"Processing object\" object=\"volume-provisioning-2552/glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8-ssd9r\" objectUID=48b9f027-ccfd-4534-89c9-dcccbba1f7af kind=\"EndpointSlice\" virtual=false\nI0605 01:08:10.441543       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-provisioning-2552/glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8-ssd9r\" objectUID=48b9f027-ccfd-4534-89c9-dcccbba1f7af kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:10.443400       1 glusterfs.go:932] service/endpoint: volume-provisioning-2552/glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8 deleted successfully\nI0605 01:08:10.443422       1 glusterfs.go:701] endpoint volume-provisioning-2552/glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8 is deleted successfully \nI0605 01:08:10.443458       1 pv_controller.go:1421] volume \"pvc-289f2918-f2d1-46fd-8b34-366aedc8cea8\" deleted\nW0605 01:08:10.446132       1 endpointslicemirroring_controller.go:255] Error mirroring EndpointSlices for \"volume-provisioning-2552/glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8\" Endpoints, retrying. Error: Error(s) deleting 1/1 EndpointSlices for volume-provisioning-2552/glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8 Endpoints, including: endpointslices.discovery.k8s.io \"glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8-ssd9r\" not found\nE0605 01:08:10.446886       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8-ssd9r\", UID:\"48b9f027-ccfd-4534-89c9-dcccbba1f7af\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"volume-provisioning-2552\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Endpoints\", Name:\"glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8\", UID:\"4f407756-5c47-45a7-930e-9358c4461d55\", Controller:(*bool)(0xc002f18a5c), BlockOwnerDeletion:(*bool)(0xc002f18a5d)}}}: endpointslices.discovery.k8s.io \"glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8-ssd9r\" not found\nI0605 01:08:10.452207       1 garbagecollector.go:471] \"Processing object\" object=\"volume-provisioning-2552/glusterfs-dynamic-289f2918-f2d1-46fd-8b34-366aedc8cea8-ssd9r\" objectUID=48b9f027-ccfd-4534-89c9-dcccbba1f7af kind=\"EndpointSlice\" virtual=false\nI0605 01:08:10.461352       1 pv_controller_base.go:504] deletion of claim \"volume-provisioning-2552/pvc-kmhfg\" was already processed\nI0605 01:08:10.747615       1 utils.go:413] 
couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.34.20).\nI0605 01:08:10.775482       1 namespace_controller.go:185] Namespace has been deleted configmap-6245\nI0605 01:08:10.786632       1 namespace_controller.go:185] Namespace has been deleted projected-2830\nI0605 01:08:10.857023       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-4270/httpd\" objectUID=af442ac5-2bd8-42e1-a3cd-cfc960165334 kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:10.859659       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4270/httpd\" objectUID=af442ac5-2bd8-42e1-a3cd-cfc960165334 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0605 01:08:10.859883       1 namespace_controller.go:185] Namespace has been deleted container-probe-2541\nI0605 01:08:11.080229       1 namespace_controller.go:185] Namespace has been deleted emptydir-3605\nI0605 01:08:11.144292       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.163.128).\nI0605 01:08:11.536429       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.39.51).\nI0605 01:08:11.895918       1 namespace_controller.go:185] Namespace has been deleted downward-api-3720\nI0605 01:08:12.522052       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9536/pod-324c7135-fc9b-4963-9668-001ef0595ce2 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-klnpz pvc- persistent-local-volumes-test-9536  44113849-4fb8-489d-9235-693832a0bab7 28821 0 2021-06-05 01:07:55 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc00072aa68 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvlvspt,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9536,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi 
BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:12.522133       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9536/pvc-klnpz because it is still being used\nI0605 01:08:12.624950       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:12.739397       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.41.16).\nI0605 01:08:12.764329       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-128/pod-66686a11-4119-454d-bb70-3623edefd20d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xk5ll pvc- persistent-local-volumes-test-128  35bc4df3-711a-43e2-bf3e-43b2ee4fa963 28834 0 2021-06-05 01:07:51 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc001e32cd8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvztvl2,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-128,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:12.764391       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-128/pvc-xk5ll because it is still being used\nI0605 01:08:13.076952       1 namespace_controller.go:185] Namespace has been deleted replication-controller-9222\nI0605 01:08:13.549191       1 namespace_controller.go:185] Namespace has been deleted security-context-test-3982\nI0605 01:08:13.630095       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:13.744724       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-8636-5609/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.41.16).\nE0605 01:08:14.041719       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:14.153792       1 namespace_controller.go:185] Namespace has been deleted kubectl-7617\nE0605 01:08:14.644285       1 pv_controller.go:1437] error finding provisioning plugin for claim volumemode-9417/pvc-2wcz6: storageclass.storage.k8s.io \"volumemode-9417\" not found\nI0605 01:08:14.644581       1 event.go:291] \"Event occurred\" object=\"volumemode-9417/pvc-2wcz6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-9417\\\" not found\"\nI0605 01:08:14.784467       1 pv_controller.go:864] volume \"aws-hmc78\" entered phase \"Available\"\nI0605 01:08:14.823925       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9536/pod-324c7135-fc9b-4963-9668-001ef0595ce2 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-klnpz pvc- persistent-local-volumes-test-9536  44113849-4fb8-489d-9235-693832a0bab7 28821 0 2021-06-05 01:07:55 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc00072aa68 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvlvspt,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9536,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:14.824038       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9536/pvc-klnpz because it is still being used\nI0605 01:08:14.983173       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:15.223534       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-128/pod-66686a11-4119-454d-bb70-3623edefd20d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xk5ll pvc- persistent-local-volumes-test-128  35bc4df3-711a-43e2-bf3e-43b2ee4fa963 28834 0 2021-06-05 01:07:51 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc001e32cd8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvztvl2,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-128,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:15.223603       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-128/pvc-xk5ll because it is still being used\nI0605 01:08:15.413508       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:15.432207       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-128/pod-66686a11-4119-454d-bb70-3623edefd20d uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xk5ll pvc- persistent-local-volumes-test-128  35bc4df3-711a-43e2-bf3e-43b2ee4fa963 28834 0 2021-06-05 01:07:51 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc001e32cd8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvztvl2,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-128,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:15.432272       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-128/pvc-xk5ll because it is still being used\nI0605 01:08:15.435790       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-128/pod-400a2bf9-c596-4056-b996-f0c68543c889 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xk5ll pvc- persistent-local-volumes-test-128  35bc4df3-711a-43e2-bf3e-43b2ee4fa963 28834 0 2021-06-05 01:07:51 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc001e32cd8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvztvl2,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-128,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:15.435844       1 pvc_protection_controller.go:181] Keeping PVC 
persistent-local-volumes-test-128/pvc-xk5ll because it is still being used\nI0605 01:08:15.578197       1 namespace_controller.go:185] Namespace has been deleted provisioning-5973-632\nI0605 01:08:15.622192       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-attacher-8q2vb\" objectUID=60977751-0b25-45b4-963a-cb242b991969 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:15.625582       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-attacher-8q2vb\" objectUID=60977751-0b25-45b4-963a-cb242b991969 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:15.682192       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-7635-260/csi-hostpath-attacher\nI0605 01:08:15.682214       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-attacher-79fd668497\" objectUID=d953dc66-1213-4572-a56a-cd7e52c0e1f0 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:15.682242       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-attacher-0\" objectUID=29de8645-f8b7-4e00-9378-9b12ecc11beb kind=\"Pod\" virtual=false\nI0605 01:08:15.684695       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-attacher-79fd668497\" objectUID=d953dc66-1213-4572-a56a-cd7e52c0e1f0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:15.685026       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-attacher-0\" objectUID=29de8645-f8b7-4e00-9378-9b12ecc11beb kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:15.791842       1 pvc_protection_controller.go:291] PVC provisioning-4992/aws4c5km is unused\nI0605 01:08:15.805465       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpathplugin-jlpw4\" objectUID=da31ae83-30fd-4d42-9c3a-a1c95a629d36 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:15.814260       1 pv_controller.go:638] volume \"pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:15.814511       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpathplugin-jlpw4\" objectUID=da31ae83-30fd-4d42-9c3a-a1c95a629d36 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:15.822527       1 pv_controller.go:864] volume \"pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52\" entered phase \"Released\"\nI0605 01:08:15.830038       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-128/pod-400a2bf9-c596-4056-b996-f0c68543c889 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xk5ll pvc- persistent-local-volumes-test-128  35bc4df3-711a-43e2-bf3e-43b2ee4fa963 28834 0 2021-06-05 01:07:51 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc001e32cd8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvztvl2,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-128,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:15.830114       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-128/pvc-xk5ll because it is still being used\nI0605 01:08:15.831528       1 pv_controller.go:1326] isVolumeReleased[pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52]: volume is released\nI0605 01:08:15.885736       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpathplugin-0\" objectUID=3f93ed65-b5e5-4d72-ba5c-26f726d05a6b kind=\"Pod\" virtual=false\nI0605 01:08:15.885972       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-7635-260/csi-hostpathplugin\nI0605 01:08:15.886040       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpathplugin-c57bfc67f\" objectUID=7db327c0-3682-4362-b64e-b8bacf943849 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:15.887824       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpathplugin-0\" objectUID=3f93ed65-b5e5-4d72-ba5c-26f726d05a6b kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:15.888103       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpathplugin-c57bfc67f\" objectUID=7db327c0-3682-4362-b64e-b8bacf943849 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:15.939090       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-provisioner-2qf77\" objectUID=40baf47a-2569-4ba2-b5c9-84759566cfb8 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:15.944909       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-provisioner-2qf77\" objectUID=40baf47a-2569-4ba2-b5c9-84759566cfb8 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:15.960541       1 aws_util.go:62] Error deleting EBS Disk volume aws://us-west-1a/vol-032a6d0de0c59f381: error deleting EBS volume \"vol-032a6d0de0c59f381\" since volume is currently attached to \"i-01bd6548e8f6bd7c1\"\nE0605 01:08:15.960600       1 goroutinemap.go:150] Operation for \"delete-pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52[833cf1b5-9987-4ed5-820d-45b08f167c78]\" failed. No retries permitted until 2021-06-05 01:08:16.46058168 +0000 UTC m=+841.393112101 (durationBeforeRetry 500ms). 
Error: \"error deleting EBS volume \\\"vol-032a6d0de0c59f381\\\" since volume is currently attached to \\\"i-01bd6548e8f6bd7c1\\\"\"\nI0605 01:08:15.960629       1 event.go:291] \"Event occurred\" object=\"pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-032a6d0de0c59f381\\\" since volume is currently attached to \\\"i-01bd6548e8f6bd7c1\\\"\"\nI0605 01:08:15.990803       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:16.001971       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-provisioner-55cf56667c\" objectUID=78a066ec-3e5e-4902-b08f-9c9dea1e4df6 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:16.002301       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-7635-260/csi-hostpath-provisioner\nI0605 01:08:16.002346       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-provisioner-0\" objectUID=0248d533-7370-4cf0-a9df-9b0dfb34175a kind=\"Pod\" virtual=false\nI0605 01:08:16.004346       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-provisioner-0\" objectUID=0248d533-7370-4cf0-a9df-9b0dfb34175a kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:16.004364       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-provisioner-55cf56667c\" objectUID=78a066ec-3e5e-4902-b08f-9c9dea1e4df6 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:16.022297       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-128/pod-400a2bf9-c596-4056-b996-f0c68543c889 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-xk5ll pvc- persistent-local-volumes-test-128  35bc4df3-711a-43e2-bf3e-43b2ee4fa963 28834 0 2021-06-05 01:07:51 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc001e32cd8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvztvl2,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-128,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:16.022367       1 pvc_protection_controller.go:181] Keeping PVC 
persistent-local-volumes-test-128/pvc-xk5ll because it is still being used\nI0605 01:08:16.027611       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-128/pvc-xk5ll is unused\nI0605 01:08:16.036872       1 pv_controller.go:638] volume \"local-pvztvl2\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:08:16.041051       1 pv_controller.go:864] volume \"local-pvztvl2\" entered phase \"Released\"\nI0605 01:08:16.048047       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-128/pvc-xk5ll\" was already processed\nI0605 01:08:16.056062       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-resizer-njtmn\" objectUID=800a3e92-b10e-4068-8efe-a81120256ac1 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:16.058061       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-resizer-njtmn\" objectUID=800a3e92-b10e-4068-8efe-a81120256ac1 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:16.121414       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-resizer-0\" objectUID=77e57966-5c50-48f8-b12d-fc63f057788f kind=\"Pod\" virtual=false\nI0605 01:08:16.121577       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-7635-260/csi-hostpath-resizer\nI0605 01:08:16.121651       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-resizer-6695476b66\" objectUID=b82dc1ed-ac9f-4d11-92f1-d7c279cff861 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:16.123506       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-resizer-6695476b66\" objectUID=b82dc1ed-ac9f-4d11-92f1-d7c279cff861 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:16.123634       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-resizer-0\" objectUID=77e57966-5c50-48f8-b12d-fc63f057788f kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:16.174785       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-snapshotter-zqz7z\" objectUID=70c5a3fd-5c66-497b-9de5-b25f5f774e9c kind=\"EndpointSlice\" virtual=false\nI0605 01:08:16.179158       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-snapshotter-zqz7z\" objectUID=70c5a3fd-5c66-497b-9de5-b25f5f774e9c kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:16.240321       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-snapshotter-0\" objectUID=e6603e9c-c388-4789-addd-7dd16021acf8 kind=\"Pod\" virtual=false\nI0605 01:08:16.240584       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-7635-260/csi-hostpath-snapshotter\nI0605 01:08:16.240630       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-7635-260/csi-hostpath-snapshotter-5f7f4cdf88\" objectUID=f9f2ec9b-19d3-4d19-ae19-5f295a8ba7f8 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:16.242467       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-snapshotter-0\" objectUID=e6603e9c-c388-4789-addd-7dd16021acf8 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:16.242700       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-7635-260/csi-hostpath-snapshotter-5f7f4cdf88\" objectUID=f9f2ec9b-19d3-4d19-ae19-5f295a8ba7f8 
kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:16.262512       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4496/pvc-mw7dc\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4496\\\" or manually created by system administrator\"\nI0605 01:08:16.293765       1 pv_controller.go:864] volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" entered phase \"Bound\"\nI0605 01:08:16.293798       1 pv_controller.go:967] volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" bound to claim \"csi-mock-volumes-4496/pvc-mw7dc\"\nI0605 01:08:16.301716       1 pv_controller.go:808] claim \"csi-mock-volumes-4496/pvc-mw7dc\" entered phase \"Bound\"\nI0605 01:08:16.551108       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4496^4\") from node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:16.590654       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4496^4\") from node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:16.590931       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4496/pvc-volume-tester-jzcwf\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\\\" \"\nI0605 01:08:16.632081       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9536/pod-324c7135-fc9b-4963-9668-001ef0595ce2 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-klnpz pvc- persistent-local-volumes-test-9536  44113849-4fb8-489d-9235-693832a0bab7 28821 0 2021-06-05 01:07:55 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc00072aa68 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvlvspt,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9536,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:16.632155       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9536/pvc-klnpz because it is still being used\nI0605 01:08:16.675201       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume 
\"pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-032a6d0de0c59f381\") on node \"ip-172-20-35-190.us-west-1.compute.internal\" \nI0605 01:08:16.677029       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-032a6d0de0c59f381\") on node \"ip-172-20-35-190.us-west-1.compute.internal\" \nI0605 01:08:16.823869       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9536/pod-324c7135-fc9b-4963-9668-001ef0595ce2 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-klnpz pvc- persistent-local-volumes-test-9536  44113849-4fb8-489d-9235-693832a0bab7 28821 0 2021-06-05 01:07:55 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc00072aa68 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvlvspt,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9536,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:16.823935       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9536/pvc-klnpz because it is still being used\nI0605 01:08:16.827561       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9536/pod-324c7135-fc9b-4963-9668-001ef0595ce2 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-klnpz pvc- persistent-local-volumes-test-9536  44113849-4fb8-489d-9235-693832a0bab7 28821 0 2021-06-05 01:07:55 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc00072aa68 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi 
BinarySI},},},VolumeName:local-pvlvspt,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9536,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:16.827621       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9536/pvc-klnpz because it is still being used\nI0605 01:08:17.825322       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-9536/pod-324c7135-fc9b-4963-9668-001ef0595ce2 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-klnpz pvc- persistent-local-volumes-test-9536  44113849-4fb8-489d-9235-693832a0bab7 28821 0 2021-06-05 01:07:55 +0000 UTC 2021-06-05 01:08:12 +0000 UTC 0xc00072aa68 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:07:55 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvlvspt,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-9536,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:17.825400       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-9536/pvc-klnpz because it is still being used\nI0605 01:08:17.830368       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-9536/pvc-klnpz is unused\nI0605 01:08:17.837160       1 pv_controller.go:638] volume \"local-pvlvspt\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:08:17.840441       1 pv_controller.go:864] volume \"local-pvlvspt\" entered phase \"Released\"\nI0605 01:08:17.845174       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-9536/pvc-klnpz\" was already processed\nI0605 01:08:18.624081       1 namespace_controller.go:185] Namespace has been deleted volume-expand-7635\nI0605 01:08:18.885965       1 pv_controller.go:915] claim \"volumemode-5078/pvc-wtjst\" bound to volume \"local-87ss4\"\nI0605 01:08:18.890340       1 pv_controller.go:1326] isVolumeReleased[pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52]: volume is released\nI0605 01:08:18.897214       1 pv_controller.go:864] volume \"local-87ss4\" entered phase \"Bound\"\nI0605 01:08:18.897242       1 pv_controller.go:967] volume \"local-87ss4\" bound to claim \"volumemode-5078/pvc-wtjst\"\nI0605 01:08:18.910171       1 pv_controller.go:808] claim \"volumemode-5078/pvc-wtjst\" entered phase \"Bound\"\nI0605 01:08:18.910315       1 pv_controller.go:915] claim 
\"volumemode-9417/pvc-2wcz6\" bound to volume \"aws-hmc78\"\nE0605 01:08:18.913007       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-128/default: secrets \"default-token-5lvjm\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-128 because it is being terminated\nI0605 01:08:18.918458       1 pv_controller.go:864] volume \"aws-hmc78\" entered phase \"Bound\"\nI0605 01:08:18.918485       1 pv_controller.go:967] volume \"aws-hmc78\" bound to claim \"volumemode-9417/pvc-2wcz6\"\nI0605 01:08:18.925344       1 pv_controller.go:808] claim \"volumemode-9417/pvc-2wcz6\" entered phase \"Bound\"\nI0605 01:08:18.925450       1 pv_controller.go:915] claim \"provisioning-3282/pvc-762w9\" bound to volume \"local-pwtnt\"\nI0605 01:08:18.932760       1 pv_controller.go:864] volume \"local-pwtnt\" entered phase \"Bound\"\nI0605 01:08:18.932782       1 pv_controller.go:967] volume \"local-pwtnt\" bound to claim \"provisioning-3282/pvc-762w9\"\nI0605 01:08:18.939434       1 pv_controller.go:808] claim \"provisioning-3282/pvc-762w9\" entered phase \"Bound\"\nI0605 01:08:19.065175       1 aws_util.go:62] Error deleting EBS Disk volume aws://us-west-1a/vol-032a6d0de0c59f381: error deleting EBS volume \"vol-032a6d0de0c59f381\" since volume is currently attached to \"i-01bd6548e8f6bd7c1\"\nE0605 01:08:19.065239       1 goroutinemap.go:150] Operation for \"delete-pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52[833cf1b5-9987-4ed5-820d-45b08f167c78]\" failed. No retries permitted until 2021-06-05 01:08:20.065217043 +0000 UTC m=+844.997747468 (durationBeforeRetry 1s). Error: \"error deleting EBS volume \\\"vol-032a6d0de0c59f381\\\" since volume is currently attached to \\\"i-01bd6548e8f6bd7c1\\\"\"\nI0605 01:08:19.065381       1 event.go:291] \"Event occurred\" object=\"pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-032a6d0de0c59f381\\\" since volume is currently attached to \\\"i-01bd6548e8f6bd7c1\\\"\"\nI0605 01:08:19.182077       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-hmc78\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0e7161a3f600b4800\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:19.218733       1 aws.go:2014] Assigned mount device bd -> volume vol-0e7161a3f600b4800\nI0605 01:08:19.555528       1 aws.go:2427] AttachVolume volume=\"vol-0e7161a3f600b4800\" instance=\"i-0001a4645880ec32d\" request returned {\n  AttachTime: 2021-06-05 01:08:19.542 +0000 UTC,\n  Device: \"/dev/xvdbd\",\n  InstanceId: \"i-0001a4645880ec32d\",\n  State: \"attaching\",\n  VolumeId: \"vol-0e7161a3f600b4800\"\n}\nI0605 01:08:19.720864       1 pvc_protection_controller.go:291] PVC provisioning-8236/pvc-wjr8t is unused\nI0605 01:08:19.744608       1 pv_controller.go:638] volume \"local-pzfjc\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:08:19.748376       1 pv_controller.go:864] volume \"local-pzfjc\" entered phase \"Released\"\nI0605 01:08:19.775139       1 pv_controller_base.go:504] deletion of claim \"provisioning-8236/pvc-wjr8t\" was already processed\nE0605 01:08:21.492187       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-7635-260/default: secrets \"default-token-6brmf\" is forbidden: unable to create new content in namespace volume-expand-7635-260 because it is being terminated\nI0605 01:08:21.662419  
     1 aws.go:2037] Releasing in-process attachment entry: bd -> volume vol-0e7161a3f600b4800\nI0605 01:08:21.662471       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"aws-hmc78\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0e7161a3f600b4800\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:21.662742       1 event.go:291] \"Event occurred\" object=\"volumemode-9417/pod-4fda01ce-af16-4ec9-8484-9de988e92130\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-hmc78\\\" \"\nI0605 01:08:21.767064       1 namespace_controller.go:185] Namespace has been deleted kubectl-4270\nE0605 01:08:21.988476       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:22.072389       1 aws.go:2291] Waiting for volume \"vol-032a6d0de0c59f381\" state: actual=detaching, desired=detached\nI0605 01:08:22.567880       1 namespace_controller.go:185] Namespace has been deleted nettest-6177\nI0605 01:08:23.358128       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9536\nI0605 01:08:23.477341       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-8411/pvc-nrgbh is unused\nI0605 01:08:23.486064       1 pv_controller.go:638] volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:23.489423       1 pv_controller.go:864] volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" entered phase \"Released\"\nI0605 01:08:23.493636       1 pv_controller.go:1326] isVolumeReleased[pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7]: volume is released\nI0605 01:08:23.708638       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-584/sample-webhook-deployment-6bd9446d55\" need=1 creating=1\nI0605 01:08:23.709360       1 event.go:291] \"Event occurred\" object=\"webhook-584/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1\"\nI0605 01:08:23.717915       1 event.go:291] \"Event occurred\" object=\"webhook-584/sample-webhook-deployment-6bd9446d55\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6bd9446d55-jfb6h\"\nI0605 01:08:23.723143       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-584/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:08:24.004318       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-128\nI0605 01:08:24.134235       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-06-05 01:07:42 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcp\",\n  InstanceId: \"i-01bd6548e8f6bd7c1\",\n  State: \"detaching\",\n  VolumeId: \"vol-032a6d0de0c59f381\"\n}\nI0605 01:08:24.134287       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52\" (UniqueName: 
\"kubernetes.io/aws-ebs/aws://us-west-1a/vol-032a6d0de0c59f381\") on node \"ip-172-20-35-190.us-west-1.compute.internal\" \nE0605 01:08:25.131515       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-5262/default: secrets \"default-token-6cdwt\" is forbidden: unable to create new content in namespace security-context-5262 because it is being terminated\nE0605 01:08:25.524332       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-2205/pvc-qg48m: storageclass.storage.k8s.io \"volume-2205\" not found\nI0605 01:08:25.524590       1 event.go:291] \"Event occurred\" object=\"volume-2205/pvc-qg48m\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-2205\\\" not found\"\nI0605 01:08:25.589134       1 pv_controller.go:864] volume \"local-czz9f\" entered phase \"Available\"\nI0605 01:08:25.977505       1 pvc_protection_controller.go:291] PVC provisioning-3282/pvc-762w9 is unused\nI0605 01:08:25.984951       1 pv_controller.go:638] volume \"local-pwtnt\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:08:25.987826       1 pv_controller.go:864] volume \"local-pwtnt\" entered phase \"Released\"\nI0605 01:08:26.033768       1 pv_controller_base.go:504] deletion of claim \"provisioning-3282/pvc-762w9\" was already processed\nI0605 01:08:26.422361       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:26.431062       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:26.538565       1 namespace_controller.go:185] Namespace has been deleted volume-expand-7635-260\nE0605 01:08:27.481343       1 tokens_controller.go:262] error synchronizing serviceaccount svcaccounts-8938/default: secrets \"default-token-fcrpp\" is forbidden: unable to create new content in namespace svcaccounts-8938 because it is being terminated\nI0605 01:08:27.526647       1 pv_controller.go:864] volume \"local-pvnp9vm\" entered phase \"Available\"\nI0605 01:08:27.574435       1 pv_controller.go:915] claim \"persistent-local-volumes-test-922/pvc-ktlws\" bound to volume \"local-pvnp9vm\"\nI0605 01:08:27.581524       1 pv_controller.go:864] volume \"local-pvnp9vm\" entered phase \"Bound\"\nI0605 01:08:27.581552       1 pv_controller.go:967] volume \"local-pvnp9vm\" bound to claim \"persistent-local-volumes-test-922/pvc-ktlws\"\nI0605 01:08:27.589451       1 pv_controller.go:808] claim \"persistent-local-volumes-test-922/pvc-ktlws\" entered phase \"Bound\"\nI0605 01:08:27.916440       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-4601\nI0605 01:08:27.962797       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-584/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.180.9).\nI0605 01:08:28.192442       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4557-3599/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0605 01:08:28.337053       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-hmc78\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0e7161a3f600b4800\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:28.342283       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"aws-hmc78\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0e7161a3f600b4800\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:28.828211       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-583/sample-webhook-deployment-6bd9446d55\" need=1 creating=1\nI0605 01:08:28.828418       1 event.go:291] \"Event occurred\" object=\"webhook-583/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1\"\nI0605 01:08:28.840327       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-583/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:08:28.840619       1 event.go:291] \"Event occurred\" object=\"webhook-583/sample-webhook-deployment-6bd9446d55\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6bd9446d55-rmzb6\"\nI0605 01:08:29.047261       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8411^4\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:29.051063       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8411^4\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:29.072255       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-45d690cc-5037-4a41-82cc-6a3d2dabf0d7\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8411^4\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:29.087649       1 pv_controller_base.go:504] deletion of claim \"csi-mock-volumes-8411/pvc-nrgbh\" was already processed\nE0605 01:08:29.349423       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:29.457224       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-584/e2e-test-webhook-vqgkv\" objectUID=cce9e964-ac8e-4613-aed1-65c4cea699c4 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:29.527668       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-584/sample-webhook-deployment-6bd9446d55\" objectUID=8fd0c8f6-08d5-40a9-83a4-5897111df268 kind=\"ReplicaSet\" 
virtual=false\nI0605 01:08:29.527736       1 deployment_controller.go:581] Deployment webhook-584/sample-webhook-deployment has been deleted\nI0605 01:08:29.658526       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4496^4\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:29.664614       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4496^4\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:29.674604       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4496^4\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:29.720530       1 event.go:291] \"Event occurred\" object=\"volume-expand-8876/awssrcc7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0605 01:08:29.818543       1 garbagecollector.go:471] \"Processing object\" object=\"nettest-7781/netserver-0\" objectUID=0662b2b0-1fd0-47f6-9047-7d4ecd215aaa kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:29.825782       1 garbagecollector.go:471] \"Processing object\" object=\"nettest-7781/netserver-1\" objectUID=de9f9397-dfd6-4adc-9f08-93e0a12671b6 kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:29.833948       1 garbagecollector.go:471] \"Processing object\" object=\"nettest-7781/netserver-2\" objectUID=06821ef0-afaa-4e40-8018-b8bc372d594c kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:29.840044       1 garbagecollector.go:471] \"Processing object\" object=\"nettest-7781/netserver-3\" objectUID=6fb529bf-71e1-4c1f-8f1c-d6b99e96317b kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:30.012688       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-584/sample-webhook-deployment-6bd9446d55\" objectUID=8fd0c8f6-08d5-40a9-83a4-5897111df268 kind=\"ReplicaSet\" propagationPolicy=Background\nI0605 01:08:30.012961       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-584/e2e-test-webhook-vqgkv\" objectUID=cce9e964-ac8e-4613-aed1-65c4cea699c4 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:30.019439       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-584/sample-webhook-deployment-6bd9446d55-jfb6h\" objectUID=6e0526b4-7dfc-40b4-90ce-2d6eafbb8ae5 kind=\"Pod\" virtual=false\nI0605 01:08:30.022066       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-584/sample-webhook-deployment-6bd9446d55-jfb6h\" objectUID=6e0526b4-7dfc-40b4-90ce-2d6eafbb8ae5 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:30.197265       1 namespace_controller.go:185] Namespace has been deleted security-context-5262\nI0605 01:08:30.458142       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-1856/liveness-e3da0cd7-1ac8-4735-a120-5e687470608d\" objectUID=3fb7a7ad-3912-4d6c-87fe-4aeab4a6360f kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:30.461633       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-1856/liveness-e3da0cd7-1ac8-4735-a120-5e687470608d\" objectUID=3fb7a7ad-3912-4d6c-87fe-4aeab4a6360f kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0605 01:08:30.514768     
  1 pv_controller.go:1437] error finding provisioning plugin for claim volume-5135/pvc-k64n9: storageclass.storage.k8s.io \"volume-5135\" not found\nI0605 01:08:30.515053       1 event.go:291] \"Event occurred\" object=\"volume-5135/pvc-k64n9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-5135\\\" not found\"\nI0605 01:08:30.646090       1 pv_controller.go:864] volume \"aws-zlg7b\" entered phase \"Available\"\nI0605 01:08:30.654932       1 namespace_controller.go:185] Namespace has been deleted provisioning-8236\nI0605 01:08:30.718756       1 namespace_controller.go:185] Namespace has been deleted tables-4284\nI0605 01:08:30.961497       1 pvc_protection_controller.go:291] PVC volumemode-9417/pvc-2wcz6 is unused\nI0605 01:08:30.970089       1 pv_controller.go:638] volume \"aws-hmc78\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:08:30.973206       1 pv_controller.go:864] volume \"aws-hmc78\" entered phase \"Released\"\nI0605 01:08:31.018732       1 pv_controller_base.go:504] deletion of claim \"volumemode-9417/pvc-2wcz6\" was already processed\nI0605 01:08:31.329651       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-922/pod-1d062a7f-c5ac-448f-aecb-4b418da5a0cb uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-ktlws pvc- persistent-local-volumes-test-922  6ab1d27f-7132-4484-9bff-13a8dbb24814 29605 0 2021-06-05 01:08:27 +0000 UTC 2021-06-05 01:08:31 +0000 UTC 0xc0031f0f08 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:08:27 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:08:27 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvnp9vm,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-922,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:31.329750       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-922/pvc-ktlws because it is still being used\nI0605 01:08:31.810595       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:32.158790       1 event.go:291] \"Event occurred\" object=\"resourcequota-4359/test-claim\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"FailedBinding\" message=\"no persistent volumes available for this claim and no storage class is set\"\nI0605 01:08:32.550419       1 namespace_controller.go:185] Namespace has been 
deleted svcaccounts-8938\nI0605 01:08:33.090065       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-583/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.125.201).\nI0605 01:08:33.623981       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-922/pod-1d062a7f-c5ac-448f-aecb-4b418da5a0cb uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-ktlws pvc- persistent-local-volumes-test-922  6ab1d27f-7132-4484-9bff-13a8dbb24814 29605 0 2021-06-05 01:08:27 +0000 UTC 2021-06-05 01:08:31 +0000 UTC 0xc0031f0f08 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:08:27 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:08:27 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvnp9vm,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-922,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:33.624097       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-922/pvc-ktlws because it is still being used\nI0605 01:08:33.664887       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4557/pvc-4snm5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4557\\\" or manually created by system administrator\"\nI0605 01:08:33.696043       1 pv_controller.go:864] volume \"pvc-fd4a079a-3b44-4f45-bed8-5d9ff47bc2cc\" entered phase \"Bound\"\nI0605 01:08:33.696074       1 pv_controller.go:967] volume \"pvc-fd4a079a-3b44-4f45-bed8-5d9ff47bc2cc\" bound to claim \"csi-mock-volumes-4557/pvc-4snm5\"\nI0605 01:08:33.701811       1 pv_controller.go:808] claim \"csi-mock-volumes-4557/pvc-4snm5\" entered phase \"Bound\"\nI0605 01:08:33.739138       1 aws.go:2291] Waiting for volume \"vol-0e7161a3f600b4800\" state: actual=detaching, desired=detached\nE0605 01:08:33.823469       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:33.885900       1 pv_controller.go:915] claim \"volume-2205/pvc-qg48m\" bound to volume \"local-czz9f\"\nI0605 01:08:33.890146       1 pv_controller.go:1326] isVolumeReleased[pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52]: volume is released\nI0605 01:08:33.893193       1 
pv_controller.go:864] volume \"local-czz9f\" entered phase \"Bound\"\nI0605 01:08:33.893223       1 pv_controller.go:967] volume \"local-czz9f\" bound to claim \"volume-2205/pvc-qg48m\"\nI0605 01:08:33.899432       1 pv_controller.go:808] claim \"volume-2205/pvc-qg48m\" entered phase \"Bound\"\nI0605 01:08:33.899737       1 pv_controller.go:915] claim \"volume-5135/pvc-k64n9\" bound to volume \"aws-zlg7b\"\nI0605 01:08:33.899869       1 event.go:291] \"Event occurred\" object=\"resourcequota-4359/test-claim\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"FailedBinding\" message=\"no persistent volumes available for this claim and no storage class is set\"\nI0605 01:08:33.907183       1 pv_controller.go:864] volume \"aws-zlg7b\" entered phase \"Bound\"\nI0605 01:08:33.907208       1 pv_controller.go:967] volume \"aws-zlg7b\" bound to claim \"volume-5135/pvc-k64n9\"\nI0605 01:08:33.915699       1 pv_controller.go:808] claim \"volume-5135/pvc-k64n9\" entered phase \"Bound\"\nI0605 01:08:33.916046       1 event.go:291] \"Event occurred\" object=\"volume-expand-8876/awssrcc7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0605 01:08:33.928164       1 pvc_protection_controller.go:291] PVC volumemode-5078/pvc-wtjst is unused\nI0605 01:08:33.935793       1 pv_controller.go:638] volume \"local-87ss4\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:08:33.939126       1 pv_controller.go:864] volume \"local-87ss4\" entered phase \"Released\"\nI0605 01:08:33.986416       1 pv_controller_base.go:504] deletion of claim \"volumemode-5078/pvc-wtjst\" was already processed\nI0605 01:08:34.077959       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://us-west-1a/vol-032a6d0de0c59f381\nI0605 01:08:34.078005       1 pv_controller.go:1421] volume \"pvc-faa93fb5-d6cf-4dd6-aa18-540004a2df52\" deleted\nI0605 01:08:34.087232       1 pv_controller_base.go:504] deletion of claim \"provisioning-4992/aws4c5km\" was already processed\nI0605 01:08:34.093557       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-583/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.125.201).\nI0605 01:08:34.263578       1 event.go:291] \"Event occurred\" object=\"resourcequota-4359/test-claim\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"FailedBinding\" message=\"no persistent volumes available for this claim and no storage class is set\"\nI0605 01:08:34.266270       1 pvc_protection_controller.go:291] PVC resourcequota-4359/test-claim is unused\nE0605 01:08:34.385321       1 tokens_controller.go:262] error synchronizing serviceaccount container-runtime-3315/default: secrets \"default-token-ltksc\" is forbidden: unable to create new content in namespace container-runtime-3315 because it is being terminated\nE0605 01:08:34.507646       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-584/default: secrets \"default-token-wp2d8\" is forbidden: unable to create new content in namespace webhook-584 because it is being terminated\nI0605 01:08:34.528031       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-583/e2e-test-webhook-vxcq6\" objectUID=7fa008d5-14c1-4a4b-9f5a-db04da3661f4 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:34.531854       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-583/e2e-test-webhook-vxcq6\" objectUID=7fa008d5-14c1-4a4b-9f5a-db04da3661f4 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:34.625740       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-583/sample-webhook-deployment-6bd9446d55\" objectUID=50463f86-cc99-4927-beba-7ab6ad880d4c kind=\"ReplicaSet\" virtual=false\nI0605 01:08:34.625872       1 deployment_controller.go:581] Deployment webhook-583/sample-webhook-deployment has been deleted\nI0605 01:08:34.630194       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-583/sample-webhook-deployment-6bd9446d55\" objectUID=50463f86-cc99-4927-beba-7ab6ad880d4c kind=\"ReplicaSet\" propagationPolicy=Background\nI0605 01:08:34.635222       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-583/sample-webhook-deployment-6bd9446d55-rmzb6\" objectUID=2615d750-ae61-4c32-b031-446db392c7cf kind=\"Pod\" virtual=false\nI0605 01:08:34.639418       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-583/sample-webhook-deployment-6bd9446d55-rmzb6\" objectUID=2615d750-ae61-4c32-b031-446db392c7cf kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:34.667228       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") from node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:34.712669       1 aws.go:2014] Assigned mount device cr -> volume vol-054c9800bc3642524\nI0605 01:08:34.934037       1 namespace_controller.go:185] Namespace has been deleted nettest-7781\nI0605 01:08:34.947328       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-7514/awstn5nv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0605 01:08:35.054081       1 aws.go:2427] AttachVolume volume=\"vol-054c9800bc3642524\" instance=\"i-04b8aeda8cac6552a\" request returned {\n  AttachTime: 2021-06-05 01:08:35.042 +0000 UTC,\n  Device: \"/dev/xvdcr\",\n  InstanceId: \"i-04b8aeda8cac6552a\",\n  State: \"attaching\",\n  VolumeId: \"vol-054c9800bc3642524\"\n}\nI0605 01:08:35.070089       1 
reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-zlg7b\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0bb8498db2ff6cfb2\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:35.124436       1 aws.go:2014] Assigned mount device cb -> volume vol-0bb8498db2ff6cfb2\nI0605 01:08:35.309736       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-5320/test-quota\nE0605 01:08:35.350369       1 pv_controller.go:1437] error finding provisioning plugin for claim volumemode-2352/pvc-qdvdf: storageclass.storage.k8s.io \"volumemode-2352\" not found\nI0605 01:08:35.350412       1 event.go:291] \"Event occurred\" object=\"volumemode-2352/pvc-qdvdf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-2352\\\" not found\"\nI0605 01:08:35.388547       1 event.go:291] \"Event occurred\" object=\"provisioning-4258/nfsp65lg\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-4258\\\" or manually created by system administrator\"\nI0605 01:08:35.388759       1 event.go:291] \"Event occurred\" object=\"provisioning-4258/nfsp65lg\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-4258\\\" or manually created by system administrator\"\nI0605 01:08:35.406505       1 pv_controller.go:864] volume \"local-s62kz\" entered phase \"Available\"\nI0605 01:08:35.488893       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nW0605 01:08:35.489110       1 utils.go:323] Service services-9896/service-headless-toggled using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0605 01:08:35.492106       1 aws.go:2427] AttachVolume volume=\"vol-0bb8498db2ff6cfb2\" instance=\"i-0001a4645880ec32d\" request returned {\n  AttachTime: 2021-06-05 01:08:35.478 +0000 UTC,\n  Device: \"/dev/xvdcb\",\n  InstanceId: \"i-0001a4645880ec32d\",\n  State: \"attaching\",\n  VolumeId: \"vol-0bb8498db2ff6cfb2\"\n}\nI0605 01:08:35.502098       1 pvc_protection_controller.go:291] PVC volume-expand-5848/csi-hostpathn7cs4 is unused\nI0605 01:08:35.508744       1 pv_controller.go:638] volume \"pvc-b6b7356a-75e3-425f-8c51-fcb5bc8b9a5e\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:35.511926       1 pv_controller.go:864] volume \"pvc-b6b7356a-75e3-425f-8c51-fcb5bc8b9a5e\" entered phase \"Released\"\nI0605 01:08:35.516466       1 pv_controller.go:1326] isVolumeReleased[pvc-b6b7356a-75e3-425f-8c51-fcb5bc8b9a5e]: volume is released\nI0605 01:08:35.536199       1 pv_controller_base.go:504] deletion of claim \"volume-expand-5848/csi-hostpathn7cs4\" was already processed\nE0605 01:08:35.599226       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-1856/default: secrets \"default-token-4mh7f\" is forbidden: unable to create new content in namespace container-probe-1856 because it is being terminated\nE0605 01:08:35.613689       1 tokens_controller.go:262] error synchronizing serviceaccount port-forwarding-3616/default: secrets \"default-token-bdmdn\" is forbidden: unable to create new content in namespace port-forwarding-3616 because it is being terminated\nI0605 01:08:35.630062       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-922/pod-1d062a7f-c5ac-448f-aecb-4b418da5a0cb uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-ktlws pvc- persistent-local-volumes-test-922  6ab1d27f-7132-4484-9bff-13a8dbb24814 29605 0 2021-06-05 01:08:27 +0000 UTC 2021-06-05 01:08:31 +0000 UTC 0xc0031f0f08 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-06-05 01:08:27 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-06-05 01:08:27 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvnp9vm,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-922,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0605 01:08:35.630144       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-922/pvc-ktlws because it 
is still being used\nI0605 01:08:35.638385       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-922/pvc-ktlws is unused\nI0605 01:08:35.647665       1 pv_controller.go:638] volume \"local-pvnp9vm\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:08:35.651458       1 pv_controller.go:864] volume \"local-pvnp9vm\" entered phase \"Released\"\nI0605 01:08:35.657643       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-922/pvc-ktlws\" was already processed\nI0605 01:08:35.821276       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-06-05 01:08:19 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdbd\",\n  InstanceId: \"i-0001a4645880ec32d\",\n  State: \"detaching\",\n  VolumeId: \"vol-0e7161a3f600b4800\"\n}\nI0605 01:08:35.821327       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"aws-hmc78\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0e7161a3f600b4800\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:36.044067       1 namespace_controller.go:185] Namespace has been deleted kubectl-2355\nI0605 01:08:36.729087       1 namespace_controller.go:185] Namespace has been deleted provisioning-3282\nI0605 01:08:36.988766       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-4496/pvc-mw7dc is unused\nI0605 01:08:37.003030       1 pv_controller.go:638] volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:37.007114       1 pv_controller.go:864] volume \"pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d\" entered phase \"Released\"\nI0605 01:08:37.013309       1 pv_controller.go:1326] isVolumeReleased[pvc-4eef492d-1bc4-4fe5-99db-6a747b29975d]: volume is released\nI0605 01:08:37.042143       1 pv_controller_base.go:504] deletion of claim \"csi-mock-volumes-4496/pvc-mw7dc\" was already processed\nE0605 01:08:37.055234       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-922/default: secrets \"default-token-kspxz\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-922 because it is being terminated\nE0605 01:08:37.126216       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:37.165663       1 aws.go:2037] Releasing in-process attachment entry: cr -> volume vol-054c9800bc3642524\nI0605 01:08:37.165716       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") from node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:37.165951       1 event.go:291] \"Event occurred\" object=\"volume-5319/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-volume-0\\\" \"\nI0605 01:08:37.252640       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.235.249).\nI0605 01:08:37.314585       1 event.go:291] \"Event occurred\" object=\"volume-1868-2863/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0605 01:08:37.314793       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.235.249).\nI0605 01:08:37.417198       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.88.219).\nI0605 01:08:37.479279       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.88.219).\nI0605 01:08:37.480086       1 event.go:291] \"Event occurred\" object=\"volume-1868-2863/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0605 01:08:37.509339       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8411-3900/csi-mockplugin-58859d6d44\" objectUID=94b8a0a5-ad78-40ff-8fa3-e5a1a0ff0fed kind=\"ControllerRevision\" virtual=false\nI0605 01:08:37.509620       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-8411-3900/csi-mockplugin\nI0605 01:08:37.509659       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8411-3900/csi-mockplugin-0\" objectUID=b35ed876-9ec3-4eca-8662-918cb0892905 kind=\"Pod\" virtual=false\nI0605 01:08:37.515842       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8411-3900/csi-mockplugin-0\" objectUID=b35ed876-9ec3-4eca-8662-918cb0892905 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:37.518287       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8411-3900/csi-mockplugin-58859d6d44\" objectUID=94b8a0a5-ad78-40ff-8fa3-e5a1a0ff0fed kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:37.541307       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.112.159).\nI0605 01:08:37.600821       1 event.go:291] \"Event occurred\" object=\"volume-1868-2863/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0605 01:08:37.601085       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.112.159).\nI0605 01:08:37.601681       1 aws.go:2037] Releasing in-process attachment entry: cb -> volume vol-0bb8498db2ff6cfb2\nI0605 01:08:37.601719       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"aws-zlg7b\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0bb8498db2ff6cfb2\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:08:37.601770       1 event.go:291] \"Event occurred\" object=\"volume-5135/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-zlg7b\\\" \"\nI0605 01:08:37.617608       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8411-3900/csi-mockplugin-attacher-0\" objectUID=c0389be3-d120-4417-9b70-fc622afc7758 kind=\"Pod\" virtual=false\nI0605 01:08:37.617903       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-8411-3900/csi-mockplugin-attacher\nI0605 01:08:37.617952       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-8411-3900/csi-mockplugin-attacher-74bf6694cd\" objectUID=63dc9e1f-5335-45c7-82d8-1551cef75fff kind=\"ControllerRevision\" virtual=false\nI0605 01:08:37.622082       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8411-3900/csi-mockplugin-attacher-74bf6694cd\" objectUID=63dc9e1f-5335-45c7-82d8-1551cef75fff kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:37.622365       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-8411-3900/csi-mockplugin-attacher-0\" objectUID=c0389be3-d120-4417-9b70-fc622afc7758 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:37.649759       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.195.78).\nI0605 01:08:37.712835       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.195.78).\nI0605 01:08:37.713560       1 event.go:291] \"Event occurred\" object=\"volume-1868-2863/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0605 01:08:37.761902       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.113.202).\nI0605 01:08:37.822628       1 event.go:291] \"Event occurred\" object=\"volume-1868-2863/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0605 01:08:37.822971       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.113.202).\nI0605 01:08:37.887308       1 pv_controller.go:864] volume \"pvc-78cda4e1-8eb4-491a-a815-42de35e77cc3\" entered phase \"Bound\"\nI0605 01:08:37.887338       1 pv_controller.go:967] volume \"pvc-78cda4e1-8eb4-491a-a815-42de35e77cc3\" bound to claim \"provisioning-4258/nfsp65lg\"\nI0605 01:08:37.894623       1 pv_controller.go:808] claim \"provisioning-4258/nfsp65lg\" entered phase \"Bound\"\nE0605 01:08:37.933036       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:37.979135       1 event.go:291] \"Event occurred\" object=\"volume-1868/csi-hostpathm4q7r\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-1868\\\" or manually created by system administrator\"\nI0605 01:08:37.979161       1 event.go:291] \"Event occurred\" object=\"volume-1868/csi-hostpathm4q7r\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-1868\\\" or manually created by system administrator\"\nI0605 01:08:38.043924       1 namespace_controller.go:185] Namespace has been deleted kubectl-9147\nI0605 01:08:38.256298       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.235.249).\nI0605 01:08:38.426092       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.88.219).\nI0605 01:08:38.545029       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.112.159).\nI0605 01:08:38.765466       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.113.202).\nE0605 01:08:39.532908       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-583/default: secrets \"default-token-5mb45\" is forbidden: unable to create new content in namespace webhook-583 because it is being terminated\nI0605 01:08:39.544127       1 namespace_controller.go:185] Namespace has been deleted container-runtime-3315\nI0605 01:08:39.615135       1 namespace_controller.go:185] Namespace has been deleted webhook-584\nI0605 01:08:39.628846       1 namespace_controller.go:185] Namespace has been deleted webhook-584-markers\nI0605 01:08:39.711576       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8636^7d20a48d-c59a-11eb-80fa-5ed3c4c0dd76\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:39.719510       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8636^7d20a48d-c59a-11eb-80fa-5ed3c4c0dd76\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:39.744549       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8636^7d20a48d-c59a-11eb-80fa-5ed3c4c0dd76\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:39.926820       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8411\nI0605 01:08:40.386220       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.235.249).\nI0605 01:08:40.394133       1 pv_controller.go:864] volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" entered phase \"Bound\"\nI0605 01:08:40.394165       1 pv_controller.go:967] volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" bound to claim \"volume-1868/csi-hostpathm4q7r\"\nI0605 01:08:40.404816       1 pv_controller.go:808] claim \"volume-1868/csi-hostpathm4q7r\" entered phase \"Bound\"\nI0605 01:08:40.428401       1 aws_util.go:113] Successfully created EBS Disk volume aws://us-west-1a/vol-075d1b709a4d09719\nI0605 01:08:40.434182       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.195.78).\nI0605 01:08:40.453448       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.112.159).\nI0605 01:08:40.466109       1 utils.go:413] couldn't find ipfamilies for headless service: services-9896/service-headless-toggled. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.138.162).\nI0605 01:08:40.471441       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.113.202).\nI0605 01:08:40.486699       1 pv_controller.go:1652] volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" provisioned for claim \"fsgroupchangepolicy-7514/awstn5nv\"\nI0605 01:08:40.487377       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-7514/awstn5nv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-e93901f9-58af-4a1a-b781-c87943427d76 using kubernetes.io/aws-ebs\"\nI0605 01:08:40.498659       1 pv_controller.go:864] volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" entered phase \"Bound\"\nI0605 01:08:40.498696       1 pv_controller.go:967] volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" bound to claim \"fsgroupchangepolicy-7514/awstn5nv\"\nI0605 01:08:40.509904       1 pv_controller.go:808] claim \"fsgroupchangepolicy-7514/awstn5nv\" entered phase \"Bound\"\nE0605 01:08:40.556783       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-5320/default: secrets \"default-token-bssxx\" is forbidden: unable to create new content in namespace resourcequota-5320 because it is being terminated\nE0605 01:08:40.765943       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-5499/pvc-q27s6: storageclass.storage.k8s.io \"provisioning-5499\" not found\nI0605 01:08:40.766562       1 event.go:291] \"Event occurred\" object=\"provisioning-5499/pvc-q27s6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5499\\\" not found\"\nI0605 01:08:40.770895       1 namespace_controller.go:185] Namespace has been deleted container-probe-1856\nI0605 01:08:40.824187       1 pv_controller.go:864] volume \"local-pcrpn\" entered phase \"Available\"\nI0605 01:08:41.114578       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") from node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:41.161315       1 aws.go:2014] Assigned mount device bz -> volume vol-075d1b709a4d09719\nE0605 01:08:41.312471       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4992/default: secrets \"default-token-wfmlf\" is forbidden: unable to 
create new content in namespace provisioning-4992 because it is being terminated\nE0605 01:08:41.316922       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:41.400370       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.235.249).\nI0605 01:08:41.507428       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-4359/test-quota\nI0605 01:08:41.523653       1 aws.go:2427] AttachVolume volume=\"vol-075d1b709a4d09719\" instance=\"i-04b8aeda8cac6552a\" request returned {\n  AttachTime: 2021-06-05 01:08:41.508 +0000 UTC,\n  Device: \"/dev/xvdbz\",\n  InstanceId: \"i-04b8aeda8cac6552a\",\n  State: \"attaching\",\n  VolumeId: \"vol-075d1b709a4d09719\"\n}\nI0605 01:08:41.895674       1 event.go:291] \"Event occurred\" object=\"job-4086/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-hwzkz\"\nI0605 01:08:41.901045       1 event.go:291] \"Event occurred\" object=\"job-4086/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-lb4hl\"\nI0605 01:08:42.137257       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.88.219).\nI0605 01:08:42.171047       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-922\nI0605 01:08:42.316871       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1868^906c9bc3-c59a-11eb-940f-ca7b4124bf7b\") from node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:42.331038       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1868^906c9bc3-c59a-11eb-940f-ca7b4124bf7b\") from node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:08:42.331161       1 event.go:291] \"Event occurred\" object=\"volume-1868/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\\\" \"\nE0605 01:08:42.615765       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:42.887365       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-attacher-4scfk\" objectUID=24cd0d27-93bd-4de8-9e0c-304130205a4a kind=\"EndpointSlice\" virtual=false\nI0605 01:08:42.890239       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-attacher-4scfk\" objectUID=24cd0d27-93bd-4de8-9e0c-304130205a4a kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:42.950232       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-attacher-5f6dddf6c7\" objectUID=371dc27f-5a5a-46d1-b1d5-48a7be8f2ee6 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:42.950484       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5848-5503/csi-hostpath-attacher\nI0605 01:08:42.950522       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-attacher-0\" objectUID=0010a831-0533-4bbe-9421-eea67f73c642 kind=\"Pod\" virtual=false\nI0605 01:08:42.952735       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-attacher-5f6dddf6c7\" objectUID=371dc27f-5a5a-46d1-b1d5-48a7be8f2ee6 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:42.952959       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-attacher-0\" objectUID=0010a831-0533-4bbe-9421-eea67f73c642 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:43.056974       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpathplugin-ch27c\" objectUID=9f0f503b-bc64-4bd8-bf26-cff568750ea0 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:43.061286       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpathplugin-ch27c\" objectUID=9f0f503b-bc64-4bd8-bf26-cff568750ea0 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:43.130614       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpathplugin-bb69fd69f\" objectUID=c3c8ee0c-b99c-4a37-9f75-87294c02ce8c kind=\"ControllerRevision\" 
virtual=false\nI0605 01:08:43.132166       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5848-5503/csi-hostpathplugin\nI0605 01:08:43.132223       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpathplugin-0\" objectUID=6e68f6ac-5e71-45fb-842b-f7de2440cbd1 kind=\"Pod\" virtual=false\nI0605 01:08:43.133960       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpathplugin-bb69fd69f\" objectUID=c3c8ee0c-b99c-4a37-9f75-87294c02ce8c kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:43.134295       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpathplugin-0\" objectUID=6e68f6ac-5e71-45fb-842b-f7de2440cbd1 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:43.144254       1 utils.go:413] couldn't find ipfamilies for headless service: volume-1868-2863/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.88.219).\nI0605 01:08:43.183607       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-provisioner-96mqw\" objectUID=8eb74e89-09ef-4b2c-a42f-16d944c69f4c kind=\"EndpointSlice\" virtual=false\nI0605 01:08:43.187040       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-provisioner-96mqw\" objectUID=8eb74e89-09ef-4b2c-a42f-16d944c69f4c kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:43.248884       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-provisioner-0\" objectUID=fb0fea48-89fd-4582-a1a3-d3e17b0dc763 kind=\"Pod\" virtual=false\nI0605 01:08:43.249166       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5848-5503/csi-hostpath-provisioner\nI0605 01:08:43.249212       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-provisioner-587895b6b5\" objectUID=38496ac6-d83f-4947-8ad1-6a7810f16ee5 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:43.251953       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-provisioner-587895b6b5\" objectUID=38496ac6-d83f-4947-8ad1-6a7810f16ee5 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:43.252239       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-provisioner-0\" objectUID=fb0fea48-89fd-4582-a1a3-d3e17b0dc763 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:43.305419       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-resizer-gl7pd\" objectUID=76e86301-2866-4326-94df-89c490af1ebe kind=\"EndpointSlice\" virtual=false\nI0605 01:08:43.310748       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-resizer-gl7pd\" objectUID=76e86301-2866-4326-94df-89c490af1ebe kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:43.368425       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-resizer-6bb86b6b\" objectUID=3176e38d-13ef-4499-b8cd-7fece11fafc1 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:43.368680       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5848-5503/csi-hostpath-resizer\nI0605 01:08:43.368721       1 
garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-resizer-0\" objectUID=065fd0a2-ea0e-4238-8785-6ec2b8553519 kind=\"Pod\" virtual=false\nI0605 01:08:43.372077       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-resizer-6bb86b6b\" objectUID=3176e38d-13ef-4499-b8cd-7fece11fafc1 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:43.372299       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-resizer-0\" objectUID=065fd0a2-ea0e-4238-8785-6ec2b8553519 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:43.424138       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-snapshotter-4kh86\" objectUID=0cf1a791-40e5-4ebc-a13b-631a4f5c5883 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:43.431393       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-snapshotter-4kh86\" objectUID=0cf1a791-40e5-4ebc-a13b-631a4f5c5883 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:43.489011       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-snapshotter-66455cf4d4\" objectUID=90901e60-94db-422b-8f56-33432edf2c44 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:43.489276       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-5848-5503/csi-hostpath-snapshotter\nI0605 01:08:43.489317       1 garbagecollector.go:471] \"Processing object\" object=\"volume-expand-5848-5503/csi-hostpath-snapshotter-0\" objectUID=ab390319-2cc1-410b-9ff2-1947f95b0ada kind=\"Pod\" virtual=false\nI0605 01:08:43.491111       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-snapshotter-0\" objectUID=ab390319-2cc1-410b-9ff2-1947f95b0ada kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:43.491327       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-expand-5848-5503/csi-hostpath-snapshotter-66455cf4d4\" objectUID=90901e60-94db-422b-8f56-33432edf2c44 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:43.631667       1 aws.go:2291] Waiting for volume \"vol-075d1b709a4d09719\" state: actual=attaching, desired=attached\nI0605 01:08:44.192458       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-4557/pvc-4snm5 is unused\nI0605 01:08:44.199383       1 pv_controller.go:638] volume \"pvc-fd4a079a-3b44-4f45-bed8-5d9ff47bc2cc\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:44.207387       1 pv_controller.go:864] volume \"pvc-fd4a079a-3b44-4f45-bed8-5d9ff47bc2cc\" entered phase \"Released\"\nI0605 01:08:44.216570       1 pv_controller.go:1326] isVolumeReleased[pvc-fd4a079a-3b44-4f45-bed8-5d9ff47bc2cc]: volume is released\nI0605 01:08:44.229480       1 pv_controller_base.go:504] deletion of claim \"csi-mock-volumes-4557/pvc-4snm5\" was already processed\nI0605 01:08:44.263894       1 pvc_protection_controller.go:291] PVC provisioning-4258/nfsp65lg is unused\nI0605 01:08:44.264430       1 event.go:291] \"Event occurred\" object=\"job-4086/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-vtjmm\"\nE0605 01:08:44.276247       1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key \"job-4086/fail-once-non-local\"\nI0605 01:08:44.280819       1 pv_controller.go:638] volume 
\"pvc-78cda4e1-8eb4-491a-a815-42de35e77cc3\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:44.284788       1 pv_controller.go:864] volume \"pvc-78cda4e1-8eb4-491a-a815-42de35e77cc3\" entered phase \"Released\"\nI0605 01:08:44.287757       1 pv_controller.go:1326] isVolumeReleased[pvc-78cda4e1-8eb4-491a-a815-42de35e77cc3]: volume is released\nI0605 01:08:44.298285       1 pv_controller_base.go:504] deletion of claim \"provisioning-4258/nfsp65lg\" was already processed\nI0605 01:08:44.482499       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-6844/httpd\" objectUID=70f59b4f-ae89-4725-ab69-e3418ae50062 kind=\"CiliumEndpoint\" virtual=false\nI0605 01:08:44.484622       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-6844/httpd\" objectUID=70f59b4f-ae89-4725-ab69-e3418ae50062 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0605 01:08:44.657593       1 namespace_controller.go:185] Namespace has been deleted webhook-583\nI0605 01:08:44.732654       1 namespace_controller.go:185] Namespace has been deleted webhook-583-markers\nI0605 01:08:44.848070       1 namespace_controller.go:185] Namespace has been deleted volumemode-5078\nI0605 01:08:45.448274       1 event.go:291] \"Event occurred\" object=\"job-4086/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-v725w\"\nI0605 01:08:45.617304       1 namespace_controller.go:185] Namespace has been deleted resourcequota-5320\nI0605 01:08:45.755392       1 aws.go:2037] Releasing in-process attachment entry: bz -> volume vol-075d1b709a4d09719\nI0605 01:08:45.755445       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") from node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:45.755719       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-7514/pod-33920d2c-0453-4863-8ab4-972379e2f28b\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e93901f9-58af-4a1a-b781-c87943427d76\\\" \"\nI0605 01:08:45.833572       1 namespace_controller.go:185] Namespace has been deleted volume-expand-5848\nI0605 01:08:45.887912       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-3616\nI0605 01:08:46.362786       1 namespace_controller.go:185] Namespace has been deleted provisioning-4992\nI0605 01:08:46.531067       1 namespace_controller.go:185] Namespace has been deleted resourcequota-4359\nE0605 01:08:46.576819       1 tokens_controller.go:262] error synchronizing serviceaccount dns-6273/default: secrets \"default-token-gs4dq\" is forbidden: unable to create new content in namespace dns-6273 because it is being terminated\nI0605 01:08:46.858333       1 namespace_controller.go:185] Namespace has been deleted volumemode-9417\nI0605 01:08:47.000501       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-5479764c7\" objectUID=e8129a6f-1284-426e-8d47-3baada8cb021 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:47.000793       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-4496-6498/csi-mockplugin\nI0605 01:08:47.000838       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-0\" 
objectUID=8c8540a2-d755-4565-8033-6ad079934852 kind=\"Pod\" virtual=false\nI0605 01:08:47.003255       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-0\" objectUID=8c8540a2-d755-4565-8033-6ad079934852 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:47.003536       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-5479764c7\" objectUID=e8129a6f-1284-426e-8d47-3baada8cb021 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:47.018891       1 pvc_protection_controller.go:291] PVC provisioning-8636/csi-hostpathhnl6b is unused\nI0605 01:08:47.039401       1 pv_controller.go:638] volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:08:47.042552       1 pv_controller.go:864] volume \"pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37\" entered phase \"Released\"\nI0605 01:08:47.051624       1 pv_controller.go:1326] isVolumeReleased[pvc-1a8b0c32-ce7a-425a-b577-2e51a0900d37]: volume is released\nI0605 01:08:47.065898       1 pv_controller_base.go:504] deletion of claim \"provisioning-8636/csi-hostpathhnl6b\" was already processed\nI0605 01:08:47.109295       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-attacher-77ccbf466b\" objectUID=56f00468-0399-4068-9b60-76354ad72ba3 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:47.109581       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-4496-6498/csi-mockplugin-attacher\nI0605 01:08:47.109624       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-attacher-0\" objectUID=7633adb1-3eca-4703-ad4d-232b8bc7e4c6 kind=\"Pod\" virtual=false\nI0605 01:08:47.111275       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-attacher-77ccbf466b\" objectUID=56f00468-0399-4068-9b60-76354ad72ba3 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:47.111553       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4496-6498/csi-mockplugin-attacher-0\" objectUID=7633adb1-3eca-4703-ad4d-232b8bc7e4c6 kind=\"Pod\" propagationPolicy=Background\nE0605 01:08:47.262255       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-7252/default: secrets \"default-token-94qgz\" is forbidden: unable to create new content in namespace container-probe-7252 because it is being terminated\nI0605 01:08:48.675410       1 event.go:291] \"Event occurred\" object=\"job-4086/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-5ljdz\"\nI0605 01:08:48.885891       1 pv_controller.go:915] claim \"volumemode-2352/pvc-qdvdf\" bound to volume \"local-s62kz\"\nI0605 01:08:48.892046       1 pv_controller.go:864] volume \"local-s62kz\" entered phase \"Bound\"\nI0605 01:08:48.892073       1 pv_controller.go:967] volume \"local-s62kz\" bound to claim \"volumemode-2352/pvc-qdvdf\"\nI0605 01:08:48.897484       1 pv_controller.go:808] claim \"volumemode-2352/pvc-qdvdf\" entered phase \"Bound\"\nI0605 01:08:48.897664       1 pv_controller.go:915] claim \"provisioning-5499/pvc-q27s6\" bound to volume \"local-pcrpn\"\nI0605 01:08:48.903419       1 pv_controller.go:864] volume \"local-pcrpn\" entered phase \"Bound\"\nI0605 01:08:48.903445       1 pv_controller.go:967] volume \"local-pcrpn\" bound to 
claim \"provisioning-5499/pvc-q27s6\"\nI0605 01:08:48.908476       1 pv_controller.go:808] claim \"provisioning-5499/pvc-q27s6\" entered phase \"Bound\"\nI0605 01:08:48.908749       1 event.go:291] \"Event occurred\" object=\"volume-expand-8876/awssrcc7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0605 01:08:49.371888       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4496\nE0605 01:08:49.616763       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:51.667182       1 namespace_controller.go:185] Namespace has been deleted dns-6273\nI0605 01:08:52.149810       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4557-3599/csi-mockplugin-68969875c\" objectUID=71c2ccd4-da74-4693-ade0-3be6acc91690 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:52.150047       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-4557-3599/csi-mockplugin\nI0605 01:08:52.150075       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-4557-3599/csi-mockplugin-0\" objectUID=7d8662c5-84d5-4bd5-85ef-1cbaafb36aaa kind=\"Pod\" virtual=false\nE0605 01:08:52.342547       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8636/default: secrets \"default-token-wjtfl\" is forbidden: unable to create new content in namespace provisioning-8636 because it is being terminated\nI0605 01:08:52.710015       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4557-3599/csi-mockplugin-0\" objectUID=7d8662c5-84d5-4bd5-85ef-1cbaafb36aaa kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:52.710275       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-4557-3599/csi-mockplugin-68969875c\" objectUID=71c2ccd4-da74-4693-ade0-3be6acc91690 kind=\"ControllerRevision\" propagationPolicy=Background\nE0605 01:08:52.986923       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4934/default: secrets \"default-token-xn7t2\" is forbidden: unable to create new content in namespace provisioning-4934 because it is being terminated\nI0605 01:08:53.014529       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8411-3900\nI0605 01:08:54.121245       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-5c84db5954 to 2\"\nI0605 01:08:54.121393       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kubectl-705/httpd-deployment-5c84db5954\" need=2 creating=2\nI0605 01:08:54.129162       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-705/httpd-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"httpd-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:08:54.135498       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment-5c84db5954\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: httpd-deployment-5c84db5954-p7slx\"\nI0605 01:08:54.140101       1 event.go:291] \"Event 
occurred\" object=\"kubectl-705/httpd-deployment-5c84db5954\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: httpd-deployment-5c84db5954-qkw2w\"\nE0605 01:08:54.375505       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:08:54.379536       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-attacher-zkzhw\" objectUID=928bc791-cfd8-4471-b2a1-ddaa958de9ea kind=\"EndpointSlice\" virtual=false\nI0605 01:08:54.388607       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-attacher-zkzhw\" objectUID=928bc791-cfd8-4471-b2a1-ddaa958de9ea kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:54.449053       1 stateful_set.go:419] StatefulSet has been deleted provisioning-8636-5609/csi-hostpath-attacher\nI0605 01:08:54.449076       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-attacher-76d9849878\" objectUID=14a2e18c-3592-4a74-b516-52af96894fbf kind=\"ControllerRevision\" virtual=false\nI0605 01:08:54.449093       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-attacher-0\" objectUID=07cd9c60-1015-4811-b8dc-ea1998062fb7 kind=\"Pod\" virtual=false\nI0605 01:08:54.451052       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-attacher-76d9849878\" objectUID=14a2e18c-3592-4a74-b516-52af96894fbf kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:54.451052       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-attacher-0\" objectUID=07cd9c60-1015-4811-b8dc-ea1998062fb7 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:54.559293       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpathplugin-z4b6r\" objectUID=ca3a114e-04d5-4b56-8c68-e3e19227352a kind=\"EndpointSlice\" virtual=false\nI0605 01:08:54.561765       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpathplugin-z4b6r\" objectUID=ca3a114e-04d5-4b56-8c68-e3e19227352a kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:54.625969       1 stateful_set.go:419] StatefulSet has been deleted provisioning-8636-5609/csi-hostpathplugin\nI0605 01:08:54.626006       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpathplugin-696f57554b\" objectUID=3bf46bfd-61c7-4041-9369-2db54c97d3d8 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:54.626031       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpathplugin-0\" objectUID=03f56e52-42e7-4cec-9817-b6c7fd52a8c8 kind=\"Pod\" virtual=false\nI0605 01:08:54.628294       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpathplugin-0\" objectUID=03f56e52-42e7-4cec-9817-b6c7fd52a8c8 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:54.628574       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpathplugin-696f57554b\" objectUID=3bf46bfd-61c7-4041-9369-2db54c97d3d8 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:54.639350       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4557\nI0605 
01:08:54.679482       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-provisioner-z4wjx\" objectUID=af8287a4-6b0f-4c5e-a593-d09f31892c0a kind=\"EndpointSlice\" virtual=false\nI0605 01:08:54.683456       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-provisioner-z4wjx\" objectUID=af8287a4-6b0f-4c5e-a593-d09f31892c0a kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:54.744472       1 stateful_set.go:419] StatefulSet has been deleted provisioning-8636-5609/csi-hostpath-provisioner\nI0605 01:08:54.744476       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-provisioner-797fb85556\" objectUID=7128c800-e42d-4c59-bdc4-4f809954a4e9 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:54.744506       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-provisioner-0\" objectUID=74485542-8f56-4e50-a028-c32d601f38b9 kind=\"Pod\" virtual=false\nI0605 01:08:54.749670       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-provisioner-797fb85556\" objectUID=7128c800-e42d-4c59-bdc4-4f809954a4e9 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:54.752095       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-provisioner-0\" objectUID=74485542-8f56-4e50-a028-c32d601f38b9 kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:54.798161       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-resizer-6d7dz\" objectUID=58b675e4-215e-4162-a43e-2889f62a9e7e kind=\"EndpointSlice\" virtual=false\nI0605 01:08:54.801046       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-resizer-6d7dz\" objectUID=58b675e4-215e-4162-a43e-2889f62a9e7e kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:54.859637       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-resizer-6ffdd7fcdc\" objectUID=7a349c77-e06c-4c96-9097-e05cba55df64 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:54.859918       1 stateful_set.go:419] StatefulSet has been deleted provisioning-8636-5609/csi-hostpath-resizer\nI0605 01:08:54.859962       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-resizer-0\" objectUID=84eac687-e6be-43c7-be96-5201228bb48a kind=\"Pod\" virtual=false\nI0605 01:08:54.862498       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-resizer-0\" objectUID=84eac687-e6be-43c7-be96-5201228bb48a kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:54.862713       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-resizer-6ffdd7fcdc\" objectUID=7a349c77-e06c-4c96-9097-e05cba55df64 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:54.913915       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-snapshotter-6ks54\" objectUID=a3f376a1-6075-4d95-b271-bc895ab83c98 kind=\"EndpointSlice\" virtual=false\nI0605 01:08:54.916484       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-snapshotter-6ks54\" objectUID=a3f376a1-6075-4d95-b271-bc895ab83c98 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:08:54.983219       1 garbagecollector.go:471] \"Processing object\" 
object=\"provisioning-8636-5609/csi-hostpath-snapshotter-5465b4f79f\" objectUID=f39fc420-9f4b-45e1-b09d-cff86b778c58 kind=\"ControllerRevision\" virtual=false\nI0605 01:08:54.983456       1 stateful_set.go:419] StatefulSet has been deleted provisioning-8636-5609/csi-hostpath-snapshotter\nI0605 01:08:54.983494       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-8636-5609/csi-hostpath-snapshotter-0\" objectUID=9317f140-723f-4ff3-9e9b-6a8e78e776bc kind=\"Pod\" virtual=false\nI0605 01:08:54.985284       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-snapshotter-0\" objectUID=9317f140-723f-4ff3-9e9b-6a8e78e776bc kind=\"Pod\" propagationPolicy=Background\nI0605 01:08:54.985499       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-8636-5609/csi-hostpath-snapshotter-5465b4f79f\" objectUID=f39fc420-9f4b-45e1-b09d-cff86b778c58 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:08:55.193620       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kubectl-705/httpd-deployment-5c84db5954\" need=3 creating=1\nI0605 01:08:55.194290       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-5c84db5954 to 3\"\nI0605 01:08:55.200559       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment-5c84db5954\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: httpd-deployment-5c84db5954-sh4vd\"\nI0605 01:08:55.212022       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-705/httpd-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"httpd-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:08:55.220757       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-705/httpd-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"httpd-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:08:55.307631       1 namespace_controller.go:185] Namespace has been deleted kubectl-6844\nI0605 01:08:55.654186       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"kubectl-705/httpd-deployment-86bff9b6d7\" need=1 creating=1\nI0605 01:08:55.654744       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-86bff9b6d7 to 1\"\nI0605 01:08:55.661165       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment-86bff9b6d7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: httpd-deployment-86bff9b6d7-2tsc9\"\nI0605 01:08:55.683454       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-705/httpd-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"httpd-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:08:55.898881       1 pvc_protection_controller.go:291] PVC provisioning-5499/pvc-q27s6 is unused\nI0605 01:08:55.908003       1 pv_controller.go:638] volume \"local-pcrpn\" is released and reclaim policy 
\"Retain\" will be executed\nI0605 01:08:55.910668       1 pv_controller.go:864] volume \"local-pcrpn\" entered phase \"Released\"\nI0605 01:08:55.953976       1 pv_controller_base.go:504] deletion of claim \"provisioning-5499/pvc-q27s6\" was already processed\nI0605 01:08:56.267922       1 event.go:291] \"Event occurred\" object=\"job-4086/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0605 01:08:56.576124       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:56.577963       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:08:57.249662       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set httpd-deployment-5c84db5954 to 2\"\nI0605 01:08:57.249812       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"kubectl-705/httpd-deployment-5c84db5954\" need=2 deleting=1\nI0605 01:08:57.249835       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"kubectl-705/httpd-deployment-5c84db5954\" relatedReplicaSets=[httpd-deployment-5c84db5954 httpd-deployment-86bff9b6d7]\nI0605 01:08:57.249887       1 controller_utils.go:604] \"Deleting pod\" controller=\"httpd-deployment-5c84db5954\" pod=\"kubectl-705/httpd-deployment-5c84db5954-sh4vd\"\nE0605 01:08:57.251388       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"httpd-deployment.16858a54ba97de12\", GenerateName:\"\", Namespace:\"kubectl-705\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Deployment\", Namespace:\"kubectl-705\", Name:\"httpd-deployment\", UID:\"3d67b05d-3f9a-4969-84c0-f48f9a7c289b\", APIVersion:\"apps/v1\", ResourceVersion:\"30796\", FieldPath:\"\"}, Reason:\"ScalingReplicaSet\", Message:\"Scaled down replica set httpd-deployment-5c84db5954 to 2\", Source:v1.EventSource{Component:\"deployment-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc026d18a4edc8412, ext:882181863175, loc:(*time.Location)(0x6f9a440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc026d18a4edc8412, ext:882181863175, loc:(*time.Location)(0x6f9a440)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"httpd-deployment.16858a54ba97de12\" is forbidden: unable to create new content in namespace kubectl-705 because it is being terminated' (will not retry!)\nI0605 01:08:57.255757       1 replica_set.go:559] \"Too few replicas\" 
replicaSet=\"kubectl-705/httpd-deployment-86bff9b6d7\" need=2 creating=1\nI0605 01:08:57.256382       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-86bff9b6d7 to 2\"\nE0605 01:08:57.263301       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"httpd-deployment.16858a54bafe8e49\", GenerateName:\"\", Namespace:\"kubectl-705\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Deployment\", Namespace:\"kubectl-705\", Name:\"httpd-deployment\", UID:\"3d67b05d-3f9a-4969-84c0-f48f9a7c289b\", APIVersion:\"apps/v1\", ResourceVersion:\"30849\", FieldPath:\"\"}, Reason:\"ScalingReplicaSet\", Message:\"Scaled up replica set httpd-deployment-86bff9b6d7 to 2\", Source:v1.EventSource{Component:\"deployment-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc026d18a4f433449, ext:882188592964, loc:(*time.Location)(0x6f9a440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc026d18a4f433449, ext:882188592964, loc:(*time.Location)(0x6f9a440)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"httpd-deployment.16858a54bafe8e49\" is forbidden: unable to create new content in namespace kubectl-705 because it is being terminated' (will not retry!)\nI0605 01:08:57.263707       1 event.go:291] \"Event occurred\" object=\"kubectl-705/httpd-deployment-5c84db5954\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: httpd-deployment-5c84db5954-sh4vd\"\nE0605 01:08:57.267109       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"httpd-deployment-5c84db5954.16858a54bb6ea946\", GenerateName:\"\", Namespace:\"kubectl-705\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kubectl-705\", Name:\"httpd-deployment-5c84db5954\", UID:\"30e5a030-f429-4969-a087-ae4191c749d1\", APIVersion:\"apps/v1\", ResourceVersion:\"30848\", FieldPath:\"\"}, Reason:\"SuccessfulDelete\", Message:\"Deleted pod: httpd-deployment-5c84db5954-sh4vd\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc026d18a4fb34f46, ext:882195939905, loc:(*time.Location)(0x6f9a440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc026d18a4fb34f46, ext:882195939905, 
loc:(*time.Location)(0x6f9a440)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"httpd-deployment-5c84db5954.16858a54bb6ea946\" is forbidden: unable to create new content in namespace kubectl-705 because it is being terminated' (will not retry!)\nE0605 01:08:57.308151       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-4557-3599/default: secrets \"default-token-kcwr2\" is forbidden: unable to create new content in namespace csi-mock-volumes-4557-3599 because it is being terminated\nI0605 01:08:57.421376       1 namespace_controller.go:185] Namespace has been deleted provisioning-8636\nI0605 01:08:57.446672       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4496-6498\nI0605 01:08:57.555660       1 namespace_controller.go:185] Namespace has been deleted container-probe-7252\nI0605 01:08:57.590837       1 namespace_controller.go:185] Namespace has been deleted dns-2585\nI0605 01:08:58.004532       1 namespace_controller.go:185] Namespace has been deleted provisioning-4934\nI0605 01:08:59.316050       1 event.go:291] \"Event occurred\" object=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-crd-conversion-webhook-deployment-7d6697c5b7 to 1\"\nI0605 01:08:59.316242       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment-7d6697c5b7\" need=1 creating=1\nI0605 01:08:59.331178       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-crd-conversion-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:08:59.331938       1 event.go:291] \"Event occurred\" object=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment-7d6697c5b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-crd-conversion-webhook-deployment-7d6697c5b7-h2ndd\"\nI0605 01:08:59.348224       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-crd-conversion-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0605 01:08:59.761889       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4258/default: secrets \"default-token-rzhzf\" is forbidden: unable to create new content in namespace provisioning-4258 because it is being terminated\nI0605 01:09:00.194079       1 event.go:291] \"Event occurred\" object=\"volume-expand-8876/awssrcc7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0605 01:09:00.196453       1 pvc_protection_controller.go:291] PVC volume-expand-8876/awssrcc7 is unused\nI0605 01:09:00.234641       1 pvc_protection_controller.go:291] PVC volume-2205/pvc-qg48m is unused\nI0605 01:09:00.243463       1 
pv_controller.go:638] volume \"local-czz9f\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:00.246735       1 pv_controller.go:864] volume \"local-czz9f\" entered phase \"Released\"\nI0605 01:09:00.292403       1 pv_controller_base.go:504] deletion of claim \"volume-2205/pvc-qg48m\" was already processed\nI0605 01:09:00.987387       1 garbagecollector.go:471] \"Processing object\" object=\"endpointslicemirroring-4189/example-custom-endpoints-cj69k\" objectUID=f6f2ce97-3c38-404d-a646-856b6ea62e04 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:00.992802       1 garbagecollector.go:580] \"Deleting object\" object=\"endpointslicemirroring-4189/example-custom-endpoints-cj69k\" objectUID=f6f2ce97-3c38-404d-a646-856b6ea62e04 kind=\"EndpointSlice\" propagationPolicy=Background\nE0605 01:09:01.001255       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"example-custom-endpoints-cj69k\", UID:\"f6f2ce97-3c38-404d-a646-856b6ea62e04\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"endpointslicemirroring-4189\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Endpoints\", Name:\"example-custom-endpoints\", UID:\"82716800-5b13-4fc5-9fcb-eb19126e6848\", Controller:(*bool)(0xc00327869c), BlockOwnerDeletion:(*bool)(0xc00327869d)}}}: endpointslices.discovery.k8s.io \"example-custom-endpoints-cj69k\" not found\nI0605 01:09:01.007459       1 garbagecollector.go:471] \"Processing object\" object=\"endpointslicemirroring-4189/example-custom-endpoints-cj69k\" objectUID=f6f2ce97-3c38-404d-a646-856b6ea62e04 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:01.044418       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-705/httpd-deployment-86bff9b6d7\" objectUID=28cb2d13-91fa-4eb2-a24c-5e3bbf2d971f kind=\"ReplicaSet\" virtual=false\nI0605 01:09:01.044655       1 deployment_controller.go:581] Deployment kubectl-705/httpd-deployment has been deleted\nI0605 01:09:01.044697       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-705/httpd-deployment-5c84db5954\" objectUID=30e5a030-f429-4969-a087-ae4191c749d1 kind=\"ReplicaSet\" virtual=false\nI0605 01:09:01.047890       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-705/httpd-deployment-86bff9b6d7\" objectUID=28cb2d13-91fa-4eb2-a24c-5e3bbf2d971f kind=\"ReplicaSet\" propagationPolicy=Background\nI0605 01:09:01.048113       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-705/httpd-deployment-5c84db5954\" objectUID=30e5a030-f429-4969-a087-ae4191c749d1 kind=\"ReplicaSet\" propagationPolicy=Background\nI0605 01:09:01.060189       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-705/httpd-deployment-5c84db5954-qkw2w\" 
objectUID=7534188d-9194-4ea4-a99d-2cf1df7dfc08 kind=\"Pod\" virtual=false\nI0605 01:09:01.060437       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-705/httpd-deployment-5c84db5954-sh4vd\" objectUID=e47b6282-d616-4384-ac67-055848142671 kind=\"Pod\" virtual=false\nI0605 01:09:01.060459       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-705/httpd-deployment-5c84db5954-p7slx\" objectUID=270e6a8c-eb5b-484c-86e0-3c6e4b274fe0 kind=\"Pod\" virtual=false\nI0605 01:09:01.060624       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-705/httpd-deployment-86bff9b6d7-2tsc9\" objectUID=3133c40a-0362-4d95-816d-f31530c1b397 kind=\"Pod\" virtual=false\nI0605 01:09:01.064855       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-705/httpd-deployment-5c84db5954-qkw2w\" objectUID=7534188d-9194-4ea4-a99d-2cf1df7dfc08 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:01.065185       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-705/httpd-deployment-5c84db5954-p7slx\" objectUID=270e6a8c-eb5b-484c-86e0-3c6e4b274fe0 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:01.065366       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-705/httpd-deployment-86bff9b6d7-2tsc9\" objectUID=3133c40a-0362-4d95-816d-f31530c1b397 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:01.166917       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-9896/service-headless\" need=3 creating=1\nI0605 01:09:01.191871       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-9896/service-headless-toggled\" need=3 creating=1\nI0605 01:09:01.229518       1 garbagecollector.go:471] \"Processing object\" object=\"services-9896/service-headless-k6h9b\" objectUID=d37fd0bc-c3dc-40bc-8c8c-0023fceb14de kind=\"Pod\" virtual=false\nI0605 01:09:01.229544       1 garbagecollector.go:471] \"Processing object\" object=\"services-9896/service-headless-hjsrc\" objectUID=24ff5945-7979-45c3-9006-dcc4832e84bc kind=\"Pod\" virtual=false\nI0605 01:09:01.229559       1 garbagecollector.go:471] \"Processing object\" object=\"services-9896/service-headless-g2lrv\" objectUID=09884826-3c9c-4ca9-a916-607d552f9c14 kind=\"Pod\" virtual=false\nI0605 01:09:01.240562       1 garbagecollector.go:471] \"Processing object\" object=\"services-9896/service-headless-toggled-rhrmv\" objectUID=8feb0df4-407c-4d16-9abb-fceee075abe9 kind=\"Pod\" virtual=false\nI0605 01:09:01.240593       1 garbagecollector.go:471] \"Processing object\" object=\"services-9896/service-headless-toggled-w4vsx\" objectUID=9abdcaaf-2284-43f6-8bef-321c4b3458ff kind=\"Pod\" virtual=false\nI0605 01:09:01.240608       1 garbagecollector.go:471] \"Processing object\" object=\"services-9896/service-headless-toggled-pq7xq\" objectUID=ee760436-67d7-43a9-9190-8297e54f77c1 kind=\"Pod\" virtual=false\nE0605 01:09:01.241767       1 replica_set.go:532] sync \"services-9896/service-headless-toggled\" failed with Operation cannot be fulfilled on replicationcontrollers \"service-headless-toggled\": StorageError: invalid object, Code: 4, Key: /registry/controllers/services-9896/service-headless-toggled, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d57b770f-eaf9-45b1-865d-b367662a2de6, UID in object meta: \nE0605 01:09:01.247691       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nE0605 01:09:01.378765       1 
namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nE0605 01:09:01.483037       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nE0605 01:09:01.635387       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nE0605 01:09:01.814722       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nE0605 01:09:02.000003       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nI0605 01:09:02.075132       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-054c9800bc3642524\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nE0605 01:09:02.252188       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nE0605 01:09:02.680483       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nE0605 01:09:02.994669       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5499/default: secrets \"default-token-zmghb\" is forbidden: unable to create new content in namespace provisioning-5499 because it is being terminated\nI0605 01:09:03.056446       1 namespace_controller.go:185] Namespace has been deleted volume-provisioning-2552\nE0605 01:09:03.460178       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nE0605 01:09:03.982185       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-2127/pvc-kt6vm: storageclass.storage.k8s.io \"provisioning-2127\" not found\nI0605 01:09:03.982232       1 event.go:291] \"Event occurred\" object=\"provisioning-2127/pvc-kt6vm\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2127\\\" not found\"\nI0605 01:09:04.035677       1 pv_controller.go:864] volume \"local-pnvw8\" entered phase \"Available\"\nI0605 01:09:04.079262       1 namespace_controller.go:185] Namespace has been deleted volume-expand-5848-5503\nE0605 01:09:04.851408       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nI0605 01:09:04.890535       1 namespace_controller.go:185] Namespace has been deleted provisioning-4258\nE0605 01:09:05.425804       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-8876/default: secrets \"default-token-n4vxl\" is forbidden: unable to create new content in namespace volume-expand-8876 because it is being terminated\nI0605 01:09:05.580300       1 utils.go:413] couldn't find ipfamilies for headless service: crd-webhook-5678/e2e-test-crd-conversion-webhook. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.116.60).\nE0605 01:09:05.631043       1 tokens_controller.go:262] error synchronizing serviceaccount projected-6652/default: secrets \"default-token-ht8ct\" is forbidden: unable to create new content in namespace projected-6652 because it is being terminated\nE0605 01:09:05.955502       1 tokens_controller.go:262] error synchronizing serviceaccount volume-2205/default: secrets \"default-token-xxclg\" is forbidden: unable to create new content in namespace volume-2205 because it is being terminated\nI0605 01:09:06.629324       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:09:06.633134       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:09:06.717319       1 namespace_controller.go:185] Namespace has been deleted endpointslice-4562\nE0605 01:09:07.614978       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nI0605 01:09:07.735651       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-5678/e2e-test-crd-conversion-webhook-tkllq\" objectUID=e1a80d55-b2cb-4a37-a94f-46ecde7daae9 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:07.744640       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-5678/e2e-test-crd-conversion-webhook-tkllq\" objectUID=e1a80d55-b2cb-4a37-a94f-46ecde7daae9 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:07.808687       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment-7d6697c5b7\" objectUID=de90acb9-1abe-4de6-9bc5-487780515baf kind=\"ReplicaSet\" virtual=false\nI0605 01:09:07.808942       1 deployment_controller.go:581] Deployment crd-webhook-5678/sample-crd-conversion-webhook-deployment has been deleted\nI0605 01:09:07.810891       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment-7d6697c5b7\" objectUID=de90acb9-1abe-4de6-9bc5-487780515baf kind=\"ReplicaSet\" propagationPolicy=Background\nI0605 01:09:07.815497       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment-7d6697c5b7-h2ndd\" objectUID=40dfbda8-a955-4f03-92c5-f1df5d2a08f1 kind=\"Pod\" virtual=false\nI0605 01:09:07.817345       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-5678/sample-crd-conversion-webhook-deployment-7d6697c5b7-h2ndd\" objectUID=40dfbda8-a955-4f03-92c5-f1df5d2a08f1 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:07.983725       1 pvc_protection_controller.go:291] PVC volumemode-2352/pvc-qdvdf is unused\nI0605 01:09:08.013402       1 pv_controller.go:638] volume \"local-s62kz\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:08.021885       1 pv_controller.go:864] volume \"local-s62kz\" entered phase \"Released\"\nI0605 01:09:08.088019       1 
pv_controller_base.go:504] deletion of claim \"volumemode-2352/pvc-qdvdf\" was already processed\nI0605 01:09:08.157543       1 namespace_controller.go:185] Namespace has been deleted provisioning-5499\nI0605 01:09:08.407076       1 namespace_controller.go:185] Namespace has been deleted job-4086\nI0605 01:09:09.648112       1 event.go:291] \"Event occurred\" object=\"pv-2894/pvc-vvsd2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"FailedBinding\" message=\"no persistent volumes available for this claim and no storage class is set\"\nI0605 01:09:09.711746       1 pv_controller.go:864] volume \"nfs-2rxz8\" entered phase \"Available\"\nI0605 01:09:09.882585       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.229.169).\nI0605 01:09:09.946766       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.229.169).\nI0605 01:09:09.947515       1 event.go:291] \"Event occurred\" object=\"volume-expand-6947-4650/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0605 01:09:10.044558       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.187.36).\nI0605 01:09:10.109901       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.187.36).\nI0605 01:09:10.110644       1 event.go:291] \"Event occurred\" object=\"volume-expand-6947-4650/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0605 01:09:10.168527       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.72.189).\nI0605 01:09:10.243337       1 event.go:291] \"Event occurred\" object=\"volume-expand-6947-4650/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0605 01:09:10.243604       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-provisioner. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.72.189).\nI0605 01:09:10.298556       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.188.191).\nI0605 01:09:10.363890       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.188.191).\nI0605 01:09:10.364687       1 event.go:291] \"Event occurred\" object=\"volume-expand-6947-4650/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0605 01:09:10.460222       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.89.1).\nE0605 01:09:10.486311       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:09:10.605823       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.89.1).\nI0605 01:09:10.606207       1 event.go:291] \"Event occurred\" object=\"volume-expand-6947-4650/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0605 01:09:10.606249       1 namespace_controller.go:185] Namespace has been deleted topology-1301\nI0605 01:09:10.616064       1 namespace_controller.go:185] Namespace has been deleted volume-expand-8876\nI0605 01:09:10.706103       1 namespace_controller.go:185] Namespace has been deleted projected-6652\nI0605 01:09:10.768638       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.222.108).\nI0605 01:09:10.837675       1 event.go:291] \"Event occurred\" object=\"volume-expand-6947/csi-hostpathfpj5m\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-6947\\\" or manually created by system administrator\"\nI0605 01:09:10.838826       1 event.go:291] \"Event occurred\" object=\"volume-expand-6947/csi-hostpathfpj5m\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-6947\\\" or manually created by system administrator\"\nI0605 01:09:10.890791       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.229.169).\nI0605 01:09:10.897463       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.222.108).\nI0605 01:09:10.899945       1 event.go:291] \"Event occurred\" object=\"provisioning-5504-9930/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0605 01:09:10.914216       1 namespace_controller.go:185] Namespace has been deleted crictl-8920\nI0605 01:09:10.982296       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.176.213).\nI0605 01:09:11.047082       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.187.36).\nI0605 01:09:11.067095       1 namespace_controller.go:185] Namespace has been deleted volume-2205\nI0605 01:09:11.067618       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.176.213).\nI0605 01:09:11.068567       1 event.go:291] \"Event occurred\" object=\"provisioning-5504-9930/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0605 01:09:11.118613       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.80.254).\nI0605 01:09:11.190833       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.80.254).\nI0605 01:09:11.191767       1 event.go:291] \"Event occurred\" object=\"provisioning-5504-9930/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0605 01:09:11.256630       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.63.59).\nI0605 01:09:11.292457       1 namespace_controller.go:185] Namespace has been deleted endpointslicemirroring-4189\nI0605 01:09:11.302439       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.188.191).\nI0605 01:09:11.349444       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.63.59).\nI0605 01:09:11.354032       1 event.go:291] \"Event occurred\" object=\"provisioning-5504-9930/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0605 01:09:11.401903       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.58.221).\nI0605 01:09:11.432759       1 namespace_controller.go:185] Namespace has been deleted kubectl-705\nI0605 01:09:11.465253       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-snapshotter. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.58.221).\nI0605 01:09:11.467330       1 event.go:291] \"Event occurred\" object=\"provisioning-5504-9930/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0605 01:09:11.482453       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.89.1).\nE0605 01:09:11.527762       1 pv_controller.go:1437] error finding provisioning plugin for claim volumemode-2861/pvc-ng8rh: storageclass.storage.k8s.io \"volumemode-2861\" not found\nI0605 01:09:11.528963       1 event.go:291] \"Event occurred\" object=\"volumemode-2861/pvc-ng8rh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-2861\\\" not found\"\nI0605 01:09:11.582758       1 pv_controller.go:864] volume \"local-dj2dz\" entered phase \"Available\"\nI0605 01:09:11.644808       1 event.go:291] \"Event occurred\" object=\"provisioning-5504/csi-hostpathcmf2f\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-5504\\\" or manually created by system administrator\"\nI0605 01:09:11.989869       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.176.213).\nI0605 01:09:11.999203       1 event.go:291] \"Event occurred\" object=\"webhook-7678/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-6bd9446d55 to 1\"\nI0605 01:09:11.999385       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-7678/sample-webhook-deployment-6bd9446d55\" need=1 creating=1\nI0605 01:09:12.008294       1 event.go:291] \"Event occurred\" object=\"webhook-7678/sample-webhook-deployment-6bd9446d55\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-6bd9446d55-fc7ll\"\nI0605 01:09:12.011302       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-7678/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:09:12.057308       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nE0605 01:09:12.558381       1 tokens_controller.go:262] error synchronizing serviceaccount port-forwarding-9973/default: secrets \"default-token-rgqll\" is forbidden: unable to create new content in namespace port-forwarding-9973 because it is being terminated\nI0605 01:09:12.771647       1 pv_controller.go:864] volume \"pvc-38acfff7-4394-47ba-9e75-e9e2ca686cf7\" entered phase \"Bound\"\nI0605 01:09:12.771682       1 pv_controller.go:967] volume \"pvc-38acfff7-4394-47ba-9e75-e9e2ca686cf7\" bound to claim \"volume-expand-6947/csi-hostpathfpj5m\"\nI0605 01:09:12.793777       1 pv_controller.go:808] claim \"volume-expand-6947/csi-hostpathfpj5m\" entered phase \"Bound\"\nE0605 01:09:12.806652       1 namespace_controller.go:162] deletion of namespace services-9896 failed: unexpected items still remain in namespace: services-9896 for gvr: /v1, Resource=pods\nI0605 01:09:13.371634       1 garbagecollector.go:471] \"Processing object\" object=\"services-7137/nodeport-update-service-rk6v5\" objectUID=2939e4a8-17c0-4b68-bbc9-5850da44a971 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:13.380808       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7137/nodeport-update-service-rk6v5\" objectUID=2939e4a8-17c0-4b68-bbc9-5850da44a971 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:13.485525       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.69.89.1).\nI0605 01:09:13.518354       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.72.189).\nI0605 01:09:13.560911       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.187.36).\nI0605 01:09:13.638197       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-2290\nI0605 01:09:13.679387       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.64.188.191).\nE0605 01:09:13.701209       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-3759/default: secrets \"default-token-5cxms\" is forbidden: unable to create new content in namespace resourcequota-3759 because it is being terminated\nI0605 01:09:13.787749       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-3759/test-quota\nE0605 01:09:13.811288       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-4340/default: secrets \"default-token-b5csz\" is forbidden: unable to create new content in namespace volumemode-4340 because it is being terminated\nI0605 01:09:13.853237       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.80.254).\nI0605 01:09:13.935460       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.222.108).\nI0605 01:09:14.468085       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.229.169).\nI0605 01:09:14.523417       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.72.189).\nI0605 01:09:14.612766       1 utils.go:413] couldn't find ipfamilies for headless service: volume-expand-6947-4650/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.187.36).\nI0605 01:09:14.859559       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-provisioner. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.80.254).\nI0605 01:09:14.900112       1 pv_controller.go:864] volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" entered phase \"Bound\"\nI0605 01:09:14.900144       1 pv_controller.go:967] volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" bound to claim \"provisioning-5504/csi-hostpathcmf2f\"\nI0605 01:09:14.905946       1 pv_controller.go:808] claim \"provisioning-5504/csi-hostpathcmf2f\" entered phase \"Bound\"\nI0605 01:09:14.927413       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.58.221).\nI0605 01:09:14.941630       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.222.108).\nI0605 01:09:14.942048       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.63.59).\nI0605 01:09:15.008088       1 namespace_controller.go:185] Namespace has been deleted volume-5319\nI0605 01:09:15.133660       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.176.213).\nI0605 01:09:15.145592       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") from node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:09:15.229333       1 aws.go:2014] Assigned mount device by -> volume vol-075d1b709a4d09719\nE0605 01:09:15.245425       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-8307/default: secrets \"default-token-drfnf\" is forbidden: unable to create new content in namespace configmap-8307 because it is being terminated\nI0605 01:09:15.370360       1 pvc_protection_controller.go:291] PVC volume-5135/pvc-k64n9 is unused\nI0605 01:09:15.377367       1 pv_controller.go:638] volume \"aws-zlg7b\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:15.380309       1 pv_controller.go:864] volume \"aws-zlg7b\" entered phase \"Released\"\nE0605 01:09:15.400188       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:09:15.423641       1 pv_controller_base.go:504] deletion of claim \"volume-5135/pvc-k64n9\" was already processed\nI0605 01:09:15.558152       1 aws.go:2427] AttachVolume volume=\"vol-075d1b709a4d09719\" instance=\"i-04b8aeda8cac6552a\" request returned {\n  AttachTime: 2021-06-05 01:09:15.545 +0000 UTC,\n  Device: \"/dev/xvdby\",\n  InstanceId: \"i-04b8aeda8cac6552a\",\n  State: \"attaching\",\n  VolumeId: \"vol-075d1b709a4d09719\"\n}\nE0605 01:09:15.731440       1 tokens_controller.go:262] error synchronizing serviceaccount pod-network-test-8395/default: secrets \"default-token-hs6lf\" is forbidden: unable to create new content in namespace pod-network-test-8395 because it is being terminated\nI0605 01:09:15.933381       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.58.221).\nI0605 01:09:15.950756       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-5504-9930/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.63.59).\nI0605 01:09:16.047299       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5504^a500e8e6-c59a-11eb-bae7-de3cb5ba43de\") from node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:09:16.061558       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5504^a500e8e6-c59a-11eb-bae7-de3cb5ba43de\") from node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:09:16.061789       1 event.go:291] \"Event occurred\" object=\"provisioning-5504/pod-subpath-test-dynamicpv-62vb\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\\\" \"\nI0605 01:09:16.292765       1 namespace_controller.go:185] Namespace has been deleted provisioning-8636-5609\nE0605 01:09:16.703678       1 pv_controller.go:1437] error finding provisioning plugin for claim volume-7894/pvc-rdphj: storageclass.storage.k8s.io \"volume-7894\" not found\nI0605 01:09:16.704032       1 event.go:291] \"Event occurred\" object=\"volume-7894/pvc-rdphj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-7894\\\" not found\"\nI0605 01:09:16.764630       1 pv_controller.go:864] volume \"local-jtxb4\" entered phase \"Available\"\nE0605 01:09:17.163420       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0605 01:09:17.397031       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-2743/default: secrets \"default-token-t9xlm\" is forbidden: unable to create new content in namespace security-context-2743 because it is being terminated\nI0605 01:09:17.672552       1 aws.go:2037] Releasing in-process attachment entry: by -> volume vol-075d1b709a4d09719\nI0605 01:09:17.672598       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") from node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:09:17.672818       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-7514/pod-e8a1e2a4-7087-41bd-9705-d857ab5919e3\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e93901f9-58af-4a1a-b781-c87943427d76\\\" \"\nI0605 01:09:17.881467       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-5678\nI0605 01:09:17.986327       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7166/dns-test-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. 
The service endpoint slice will use dual stack families until api-server default it correctly\nI0605 01:09:18.032487       1 namespace_controller.go:185] Namespace has been deleted provisioning-5759\nI0605 01:09:18.042277       1 utils.go:413] couldn't find ipfamilies for headless service: dns-7166/test-service-2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.212.153).\nE0605 01:09:18.078652       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:09:18.099916       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7166/dns-test-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0605 01:09:18.100324       1 utils.go:413] couldn't find ipfamilies for headless service: dns-7166/test-service-2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.212.153).\nE0605 01:09:18.181835       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-3528/default: secrets \"default-token-4qr8m\" is forbidden: unable to create new content in namespace secrets-3528 because it is being terminated\nI0605 01:09:18.259789       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-7678/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.253.235).\nE0605 01:09:18.609852       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-5963/pvc-kccm9: storageclass.storage.k8s.io \"provisioning-5963\" not found\nI0605 01:09:18.610160       1 event.go:291] \"Event occurred\" object=\"provisioning-5963/pvc-kccm9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-5963\\\" not found\"\nI0605 01:09:18.670699       1 pv_controller.go:864] volume \"local-hvf8s\" entered phase \"Available\"\nE0605 01:09:18.676221       1 tokens_controller.go:262] error synchronizing serviceaccount replication-controller-1312/default: secrets \"default-token-7wtkc\" is forbidden: unable to create new content in namespace replication-controller-1312 because it is being terminated\nI0605 01:09:18.886214       1 pv_controller.go:915] claim \"volumemode-2861/pvc-ng8rh\" bound to volume \"local-dj2dz\"\nI0605 01:09:18.892830       1 pv_controller.go:864] volume \"local-dj2dz\" entered phase \"Bound\"\nI0605 01:09:18.892860       1 pv_controller.go:967] volume \"local-dj2dz\" bound to claim \"volumemode-2861/pvc-ng8rh\"\nI0605 01:09:18.899961       1 pv_controller.go:808] claim \"volumemode-2861/pvc-ng8rh\" entered phase \"Bound\"\nI0605 01:09:18.900068       1 pv_controller.go:915] claim \"provisioning-2127/pvc-kt6vm\" bound to volume \"local-pnvw8\"\nI0605 01:09:18.905512       1 pv_controller.go:864] volume \"local-pnvw8\" entered phase \"Bound\"\nI0605 01:09:18.905535       1 pv_controller.go:967] volume \"local-pnvw8\" bound to claim \"provisioning-2127/pvc-kt6vm\"\nI0605 01:09:18.911312       1 pv_controller.go:808] claim \"provisioning-2127/pvc-kt6vm\" entered phase \"Bound\"\nI0605 01:09:18.911398       1 pv_controller.go:915] claim \"pv-2894/pvc-vvsd2\" bound to volume \"nfs-2rxz8\"\nI0605 01:09:18.916128       1 pv_controller.go:864] volume \"nfs-2rxz8\" entered phase \"Bound\"\nI0605 01:09:18.916163       1 pv_controller.go:967] volume \"nfs-2rxz8\" bound to claim \"pv-2894/pvc-vvsd2\"\nI0605 01:09:18.921157       1 pv_controller.go:808] claim \"pv-2894/pvc-vvsd2\" entered phase \"Bound\"\nI0605 01:09:18.921348       1 pv_controller.go:915] claim \"provisioning-5963/pvc-kccm9\" bound to volume \"local-hvf8s\"\nI0605 01:09:18.930604       1 pv_controller.go:864] volume \"local-hvf8s\" entered phase \"Bound\"\nI0605 01:09:18.930625       1 pv_controller.go:967] volume \"local-hvf8s\" bound to claim \"provisioning-5963/pvc-kccm9\"\nI0605 01:09:18.935488       1 pv_controller.go:808] claim \"provisioning-5963/pvc-kccm9\" entered phase \"Bound\"\nI0605 01:09:18.935580       1 pv_controller.go:915] claim \"volume-7894/pvc-rdphj\" bound to volume \"local-jtxb4\"\nI0605 01:09:18.940817       1 pv_controller.go:864] volume \"local-jtxb4\" entered phase \"Bound\"\nI0605 01:09:18.940840       1 pv_controller.go:967] volume \"local-jtxb4\" bound to claim \"volume-7894/pvc-rdphj\"\nI0605 01:09:18.944242       1 namespace_controller.go:185] Namespace has been deleted volumemode-4340\nI0605 01:09:18.945928       1 pv_controller.go:808] claim \"volume-7894/pvc-rdphj\" entered phase \"Bound\"\nI0605 01:09:18.963339       1 namespace_controller.go:185] Namespace has been deleted resourcequota-3759\nI0605 01:09:18.997199       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7166/dns-test-service likely because controller 
manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0605 01:09:19.006524       1 namespace_controller.go:185] Namespace has been deleted services-9374\nI0605 01:09:19.180600       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-zlg7b\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0bb8498db2ff6cfb2\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:09:19.182379       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"aws-zlg7b\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0bb8498db2ff6cfb2\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:09:19.264182       1 utils.go:413] couldn't find ipfamilies for headless service: webhook-7678/e2e-test-webhook. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.253.235).\nI0605 01:09:19.698449       1 namespace_controller.go:185] Namespace has been deleted volumemode-2352\nI0605 01:09:20.289115       1 namespace_controller.go:185] Namespace has been deleted configmap-8307\nE0605 01:09:20.352851       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: webhook.example.com/v1: the server could not find the requested resource, webhook.example.com/v2: the server could not find the requested resource\nE0605 01:09:20.393296       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-9000/pvc-2j2pl: storageclass.storage.k8s.io \"provisioning-9000\" not found\nI0605 01:09:20.393561       1 event.go:291] \"Event occurred\" object=\"provisioning-9000/pvc-2j2pl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9000\\\" not found\"\nI0605 01:09:20.454644       1 pv_controller.go:864] volume \"local-vh2nz\" entered phase \"Available\"\nE0605 01:09:20.478321       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-7330/default: secrets \"default-token-rddrv\" is forbidden: unable to create new content in namespace nettest-7330 because it is being terminated\nI0605 01:09:20.720578       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-7678/e2e-test-webhook-nb7zv\" objectUID=03be312b-02f7-4578-81ba-7f7f9ab1394d kind=\"EndpointSlice\" virtual=false\nI0605 01:09:20.857894       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-7678/sample-webhook-deployment-6bd9446d55\" objectUID=4ed1d0f4-4afc-4ff5-b8d1-0d3c3b7f7cbb kind=\"ReplicaSet\" virtual=false\nI0605 01:09:20.857951       1 deployment_controller.go:581] Deployment webhook-7678/sample-webhook-deployment has been deleted\nI0605 01:09:21.330681       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-7678/e2e-test-webhook-nb7zv\" objectUID=03be312b-02f7-4578-81ba-7f7f9ab1394d kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:21.330889       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-7678/sample-webhook-deployment-6bd9446d55\" objectUID=4ed1d0f4-4afc-4ff5-b8d1-0d3c3b7f7cbb kind=\"ReplicaSet\" propagationPolicy=Background\nI0605 01:09:21.393373       1 garbagecollector.go:471] \"Processing object\" 
object=\"webhook-7678/sample-webhook-deployment-6bd9446d55-fc7ll\" objectUID=c7c6cc89-e32b-4f99-a616-fc9ecabd6a95 kind=\"Pod\" virtual=false\nI0605 01:09:21.400545       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-7678/sample-webhook-deployment-6bd9446d55-fc7ll\" objectUID=c7c6cc89-e32b-4f99-a616-fc9ecabd6a95 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:22.115060       1 utils.go:424] couldn't find ipfamilies for headless service: dns-7166/dns-test-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly\nI0605 01:09:22.115732       1 utils.go:413] couldn't find ipfamilies for headless service: dns-7166/test-service-2. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.68.212.153).\nE0605 01:09:22.386252       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-9655/default: secrets \"default-token-2jcvn\" is forbidden: unable to create new content in namespace security-context-9655 because it is being terminated\nI0605 01:09:22.467052       1 namespace_controller.go:185] Namespace has been deleted ssh-2493\nI0605 01:09:22.501047       1 namespace_controller.go:185] Namespace has been deleted security-context-2743\nI0605 01:09:22.638380       1 event.go:291] \"Event occurred\" object=\"provisioning-4404/aws6hxb7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0605 01:09:22.751908       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.227.175).\nI0605 01:09:22.752944       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-9973\nI0605 01:09:22.820702       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.227.175).\nI0605 01:09:22.821491       1 event.go:291] \"Event occurred\" object=\"provisioning-2962-3992/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0605 01:09:22.925566       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.183.203).\nI0605 01:09:22.934637       1 namespace_controller.go:185] Namespace has been deleted services-9896\nI0605 01:09:23.002109       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpathplugin. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.183.203).\nI0605 01:09:23.002533       1 event.go:291] \"Event occurred\" object=\"provisioning-2962-3992/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0605 01:09:23.052365       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.44.231).\nI0605 01:09:23.141302       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.44.231).\nI0605 01:09:23.142246       1 event.go:291] \"Event occurred\" object=\"provisioning-2962-3992/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0605 01:09:23.185934       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.117.57).\nI0605 01:09:23.259844       1 event.go:291] \"Event occurred\" object=\"provisioning-2962-3992/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0605 01:09:23.260698       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.117.57).\nI0605 01:09:23.300086       1 namespace_controller.go:185] Namespace has been deleted secrets-3528\nI0605 01:09:23.328699       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.68.75).\nE0605 01:09:23.410202       1 tokens_controller.go:262] error synchronizing serviceaccount services-7137/default: secrets \"default-token-bmf6n\" is forbidden: unable to create new content in namespace services-7137 because it is being terminated\nI0605 01:09:23.416319       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.68.75).\nI0605 01:09:23.416610       1 event.go:291] \"Event occurred\" object=\"provisioning-2962-3992/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0605 01:09:23.521070       1 namespace_controller.go:185] Namespace has been deleted downward-api-8612\nI0605 01:09:23.658384       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4557-3599\nI0605 01:09:23.674553       1 event.go:291] \"Event occurred\" object=\"provisioning-2962/csi-hostpathrfnf7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-2962\\\" or manually created by system administrator\"\nE0605 01:09:23.725479       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:09:23.763115       1 namespace_controller.go:185] Namespace has been deleted replication-controller-1312\nI0605 01:09:23.772137       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.227.175).\nI0605 01:09:23.818759       1 garbagecollector.go:471] \"Processing object\" object=\"services-7137/nodeport-update-service-p5974\" objectUID=c03c77f5-40fe-41dc-832c-15a2c1a9957a kind=\"Pod\" virtual=false\nI0605 01:09:23.819198       1 garbagecollector.go:471] \"Processing object\" object=\"services-7137/nodeport-update-service-x9glk\" objectUID=44230733-0141-43bf-9a28-c98503f49fa5 kind=\"Pod\" virtual=false\nI0605 01:09:23.852190       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7137/nodeport-update-service-p5974\" objectUID=c03c77f5-40fe-41dc-832c-15a2c1a9957a kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:23.852448       1 garbagecollector.go:580] \"Deleting object\" object=\"services-7137/nodeport-update-service-x9glk\" objectUID=44230733-0141-43bf-9a28-c98503f49fa5 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:23.938368       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.183.203).\nI0605 01:09:24.066459       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.44.231).\nI0605 01:09:24.277209       1 pv_controller.go:864] volume \"local-pv2t9b9\" entered phase \"Available\"\nI0605 01:09:24.326803       1 pv_controller.go:915] claim \"persistent-local-volumes-test-3226/pvc-lrzhz\" bound to volume \"local-pv2t9b9\"\nI0605 01:09:24.334128       1 pv_controller.go:864] volume \"local-pv2t9b9\" entered phase \"Bound\"\nI0605 01:09:24.334156       1 pv_controller.go:967] volume \"local-pv2t9b9\" bound to claim \"persistent-local-volumes-test-3226/pvc-lrzhz\"\nI0605 01:09:24.340111       1 pv_controller.go:808] claim \"persistent-local-volumes-test-3226/pvc-lrzhz\" entered phase \"Bound\"\nI0605 01:09:24.353773       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.68.75).\nI0605 01:09:24.579929       1 aws.go:2291] Waiting for volume \"vol-0bb8498db2ff6cfb2\" state: actual=detaching, desired=detached\nI0605 01:09:24.587737       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-3226/pvc-lrzhz is unused\nI0605 01:09:24.594514       1 pv_controller.go:638] volume \"local-pv2t9b9\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:24.597555       1 pv_controller.go:864] volume \"local-pv2t9b9\" entered phase \"Released\"\nI0605 01:09:24.643845       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-3226/pvc-lrzhz\" was already processed\nE0605 01:09:25.767208       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-7678/default: secrets \"default-token-8nd2t\" is forbidden: unable to create new content in namespace webhook-7678 because it is being terminated\nE0605 01:09:25.969770       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4233/default: secrets \"default-token-gvmtf\" is forbidden: unable to create new content in namespace provisioning-4233 because it is being terminated\nI0605 01:09:26.016506       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1176/test-cleanup-controller\" need=1 creating=1\nI0605 01:09:26.022066       1 event.go:291] \"Event occurred\" object=\"deployment-1176/test-cleanup-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-cleanup-controller-qjfbl\"\nI0605 01:09:26.046098       1 namespace_controller.go:185] Namespace has been deleted nettest-7330\nI0605 01:09:26.452032       1 pvc_protection_controller.go:291] PVC volume-1868/csi-hostpathm4q7r is unused\nI0605 01:09:26.458668       1 pv_controller.go:638] volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:09:26.463229       1 pv_controller.go:864] volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" entered phase \"Released\"\nI0605 01:09:26.464726       1 pv_controller.go:1326] isVolumeReleased[pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c]: volume is released\nI0605 01:09:26.484536       1 pv_controller_base.go:504] deletion of claim \"volume-1868/csi-hostpathm4q7r\" was already processed\nI0605 01:09:26.617584       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-8395\nW0605 01:09:26.656777       1 
aws.go:2207] Waiting for volume \"vol-0bb8498db2ff6cfb2\" to be detached but the volume does not exist\nI0605 01:09:26.656809       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  State: \"detached\"\n}\nI0605 01:09:26.656842       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"aws-zlg7b\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0bb8498db2ff6cfb2\") on node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:09:26.683961       1 pvc_protection_controller.go:291] PVC provisioning-5504/csi-hostpathcmf2f is unused\nI0605 01:09:26.689360       1 pv_controller.go:638] volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:09:26.692700       1 pv_controller.go:864] volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" entered phase \"Released\"\nI0605 01:09:26.694966       1 pv_controller.go:1326] isVolumeReleased[pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5]: volume is released\nI0605 01:09:26.702203       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.227.175).\nI0605 01:09:26.720248       1 pv_controller_base.go:504] deletion of claim \"provisioning-5504/csi-hostpathcmf2f\" was already processed\nI0605 01:09:27.245490       1 pv_controller.go:864] volume \"pvc-bba3b33f-9516-4734-aaee-2156dc20c512\" entered phase \"Bound\"\nI0605 01:09:27.245522       1 pv_controller.go:967] volume \"pvc-bba3b33f-9516-4734-aaee-2156dc20c512\" bound to claim \"provisioning-2962/csi-hostpathrfnf7\"\nI0605 01:09:27.264557       1 pv_controller.go:808] claim \"provisioning-2962/csi-hostpathrfnf7\" entered phase \"Bound\"\nI0605 01:09:27.470450       1 namespace_controller.go:185] Namespace has been deleted security-context-9655\nI0605 01:09:27.508062       1 namespace_controller.go:185] Namespace has been deleted projected-9861\nI0605 01:09:27.711704       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-attacher. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.71.227.175).\nI0605 01:09:28.068543       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-bba3b33f-9516-4734-aaee-2156dc20c512\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-2962^ac5c0098-c59a-11eb-9ab6-feb83e3b2e74\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:09:28.076497       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-bba3b33f-9516-4734-aaee-2156dc20c512\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-2962^ac5c0098-c59a-11eb-9ab6-feb83e3b2e74\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:09:28.076770       1 event.go:291] \"Event occurred\" object=\"provisioning-2962/pod-subpath-test-dynamicpv-2chb\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-bba3b33f-9516-4734-aaee-2156dc20c512\\\" \"\nI0605 01:09:28.199244       1 aws_util.go:113] Successfully created EBS Disk volume aws://us-west-1a/vol-0c627f6859b1e28d3\nI0605 01:09:28.237958       1 namespace_controller.go:185] Namespace has been deleted volume-4158\nI0605 01:09:28.248553       1 pv_controller.go:1652] volume \"pvc-032fc37d-b769-4496-a6e0-c7af9feef78f\" provisioned for claim \"provisioning-4404/aws6hxb7\"\nI0605 01:09:28.248752       1 event.go:291] \"Event occurred\" object=\"provisioning-4404/aws6hxb7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-032fc37d-b769-4496-a6e0-c7af9feef78f using kubernetes.io/aws-ebs\"\nI0605 01:09:28.253534       1 pv_controller.go:864] volume \"pvc-032fc37d-b769-4496-a6e0-c7af9feef78f\" entered phase \"Bound\"\nI0605 01:09:28.253564       1 pv_controller.go:967] volume \"pvc-032fc37d-b769-4496-a6e0-c7af9feef78f\" bound to claim \"provisioning-4404/aws6hxb7\"\nI0605 01:09:28.260614       1 pv_controller.go:808] claim \"provisioning-4404/aws6hxb7\" entered phase \"Bound\"\nI0605 01:09:28.779211       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-032fc37d-b769-4496-a6e0-c7af9feef78f\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0c627f6859b1e28d3\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:09:28.824051       1 aws.go:2014] Assigned mount device bp -> volume vol-0c627f6859b1e28d3\nI0605 01:09:28.872217       1 pvc_protection_controller.go:291] PVC pv-2894/pvc-vvsd2 is unused\nI0605 01:09:28.877906       1 pv_controller.go:638] volume \"nfs-2rxz8\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:28.880594       1 pv_controller.go:864] volume \"nfs-2rxz8\" entered phase \"Released\"\nI0605 01:09:29.056526       1 namespace_controller.go:185] Namespace has been deleted services-7137\nI0605 01:09:29.085167       1 pv_controller_base.go:504] deletion of claim \"pv-2894/pvc-vvsd2\" was already processed\nI0605 01:09:29.196786       1 aws.go:2427] AttachVolume volume=\"vol-0c627f6859b1e28d3\" instance=\"i-0001a4645880ec32d\" request returned {\n  AttachTime: 2021-06-05 01:09:29.184 +0000 UTC,\n  Device: \"/dev/xvdbp\",\n  InstanceId: \"i-0001a4645880ec32d\",\n  State: \"attaching\",\n  VolumeId: \"vol-0c627f6859b1e28d3\"\n}\nI0605 01:09:29.372951       1 pvc_protection_controller.go:291] PVC provisioning-2127/pvc-kt6vm is unused\nI0605 01:09:29.379518       1 pv_controller.go:638] volume 
\"local-pnvw8\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:29.382472       1 pv_controller.go:864] volume \"local-pnvw8\" entered phase \"Released\"\nI0605 01:09:29.427717       1 pv_controller_base.go:504] deletion of claim \"provisioning-2127/pvc-kt6vm\" was already processed\nI0605 01:09:29.790128       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5504^a500e8e6-c59a-11eb-bae7-de3cb5ba43de\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:09:29.795763       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5504^a500e8e6-c59a-11eb-bae7-de3cb5ba43de\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:09:29.799821       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1868^906c9bc3-c59a-11eb-940f-ca7b4124bf7b\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:09:29.801650       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1868^906c9bc3-c59a-11eb-940f-ca7b4124bf7b\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:09:29.805780       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-d9114c1b-f478-4530-9ad0-b22f80b28ed5\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5504^a500e8e6-c59a-11eb-bae7-de3cb5ba43de\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nI0605 01:09:29.809529       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-97e3d409-2f9e-4ce3-accc-1d4b9dd7279c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-1868^906c9bc3-c59a-11eb-940f-ca7b4124bf7b\") on node \"ip-172-20-56-177.us-west-1.compute.internal\" \nE0605 01:09:30.333941       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-8126/default: secrets \"default-token-kpx2n\" is forbidden: unable to create new content in namespace resourcequota-8126 because it is being terminated\nI0605 01:09:30.381942       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-8126/test-quota\nI0605 01:09:30.872696       1 namespace_controller.go:185] Namespace has been deleted webhook-7678-markers\nI0605 01:09:30.887029       1 namespace_controller.go:185] Namespace has been deleted webhook-7678\nI0605 01:09:30.897227       1 namespace_controller.go:185] Namespace has been deleted volume-7020\nI0605 01:09:31.080258       1 namespace_controller.go:185] Namespace has been deleted provisioning-4233\nI0605 01:09:31.165797       1 garbagecollector.go:471] \"Processing object\" object=\"projected-3052/pod-projected-configmaps-741d26cd-7346-4fb1-8bef-d1092ce75ff6\" objectUID=3d78f2b4-5b3c-4ab3-9aed-daaafa1eb34a kind=\"CiliumEndpoint\" virtual=false\nI0605 01:09:31.169045       1 garbagecollector.go:580] \"Deleting object\" object=\"projected-3052/pod-projected-configmaps-741d26cd-7346-4fb1-8bef-d1092ce75ff6\" objectUID=3d78f2b4-5b3c-4ab3-9aed-daaafa1eb34a kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0605 01:09:31.185642       1 tokens_controller.go:262] error synchronizing serviceaccount 
projected-3052/default: secrets \"default-token-srkxq\" is forbidden: unable to create new content in namespace projected-3052 because it is being terminated\nI0605 01:09:31.305243       1 aws.go:2037] Releasing in-process attachment entry: bp -> volume vol-0c627f6859b1e28d3\nI0605 01:09:31.305306       1 operation_generator.go:360] AttachVolume.Attach succeeded for volume \"pvc-032fc37d-b769-4496-a6e0-c7af9feef78f\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-0c627f6859b1e28d3\") from node \"ip-172-20-63-110.us-west-1.compute.internal\" \nI0605 01:09:31.305470       1 event.go:291] \"Event occurred\" object=\"provisioning-4404/pod-subpath-test-dynamicpv-jflq\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-032fc37d-b769-4496-a6e0-c7af9feef78f\\\" \"\nI0605 01:09:32.344237       1 event.go:291] \"Event occurred\" object=\"deployment-1176/test-cleanup-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-cleanup-deployment-685c4f8568 to 1\"\nI0605 01:09:32.344410       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1176/test-cleanup-deployment-685c4f8568\" need=1 creating=1\nI0605 01:09:32.352359       1 event.go:291] \"Event occurred\" object=\"deployment-1176/test-cleanup-deployment-685c4f8568\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-cleanup-deployment-685c4f8568-9ng4w\"\nI0605 01:09:32.362146       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1176/test-cleanup-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-cleanup-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:09:32.372240       1 namespace_controller.go:185] Namespace has been deleted downward-api-9949\nI0605 01:09:32.929108       1 pv_controller.go:864] volume \"nfs-nwncc\" entered phase \"Available\"\nI0605 01:09:32.978498       1 pv_controller.go:915] claim \"pv-9623/pvc-c5bnn\" bound to volume \"nfs-nwncc\"\nI0605 01:09:32.985790       1 pv_controller.go:864] volume \"nfs-nwncc\" entered phase \"Bound\"\nI0605 01:09:32.985819       1 pv_controller.go:967] volume \"nfs-nwncc\" bound to claim \"pv-9623/pvc-c5bnn\"\nI0605 01:09:32.993391       1 pv_controller.go:808] claim \"pv-9623/pvc-c5bnn\" entered phase \"Bound\"\nI0605 01:09:33.301755       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.44.231).\nI0605 01:09:33.708786       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.183.203).\nI0605 01:09:33.817822       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-attacher-7x25d\" objectUID=35b1c1f1-05ab-44ca-999d-ff68ce7ba7df kind=\"EndpointSlice\" virtual=false\nI0605 01:09:33.823859       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-attacher-7x25d\" objectUID=35b1c1f1-05ab-44ca-999d-ff68ce7ba7df kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:33.886226       1 pv_controller.go:915] claim \"provisioning-9000/pvc-2j2pl\" bound to volume \"local-vh2nz\"\nI0605 01:09:33.893670       1 pv_controller.go:864] volume \"local-vh2nz\" entered phase \"Bound\"\nI0605 01:09:33.893700       1 pv_controller.go:967] volume \"local-vh2nz\" bound to claim \"provisioning-9000/pvc-2j2pl\"\nI0605 01:09:33.895625       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-attacher-5b54fdbb8f\" objectUID=1c3f08e6-fa0d-4ae0-9cab-b7d2f5c43cdd kind=\"ControllerRevision\" virtual=false\nI0605 01:09:33.895871       1 stateful_set.go:419] StatefulSet has been deleted volume-1868-2863/csi-hostpath-attacher\nI0605 01:09:33.895915       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-attacher-0\" objectUID=82575ef7-a3bc-4227-8d5e-71619f3c5238 kind=\"Pod\" virtual=false\nI0605 01:09:33.902185       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-attacher-0\" objectUID=82575ef7-a3bc-4227-8d5e-71619f3c5238 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:33.902453       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-attacher-5b54fdbb8f\" objectUID=1c3f08e6-fa0d-4ae0-9cab-b7d2f5c43cdd kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:33.902664       1 pv_controller.go:808] claim \"provisioning-9000/pvc-2j2pl\" entered phase \"Bound\"\nI0605 01:09:34.004362       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpathplugin-p9zv9\" objectUID=45cfc1f8-acc3-42be-9eb0-9bb10f549ca9 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.007535       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpathplugin-p9zv9\" objectUID=45cfc1f8-acc3-42be-9eb0-9bb10f549ca9 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.026364       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-attacher-qvmrk\" objectUID=d1bd7efa-7888-4b44-8547-f4eb3796fad2 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.029696       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-attacher-qvmrk\" objectUID=d1bd7efa-7888-4b44-8547-f4eb3796fad2 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.069123       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpathplugin-69fcfd799\" objectUID=bb9d8259-798a-4add-a428-585a3085ac1a kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.069395       1 stateful_set.go:419] StatefulSet has been deleted volume-1868-2863/csi-hostpathplugin\nI0605 01:09:34.069439       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpathplugin-0\" objectUID=1f20868d-16a7-4339-9b08-91055a6f1ee0 kind=\"Pod\" virtual=false\nI0605 01:09:34.071076       1 garbagecollector.go:580] \"Deleting object\" 
object=\"volume-1868-2863/csi-hostpathplugin-69fcfd799\" objectUID=bb9d8259-798a-4add-a428-585a3085ac1a kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:34.071411       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpathplugin-0\" objectUID=1f20868d-16a7-4339-9b08-91055a6f1ee0 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:34.090929       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-attacher-548bd59657\" objectUID=87b0e58c-d26a-4c32-ae57-97e757323b15 kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.091127       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5504-9930/csi-hostpath-attacher\nI0605 01:09:34.091202       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-attacher-0\" objectUID=e0b96396-43bd-4748-82d5-12584631c3fc kind=\"Pod\" virtual=false\nI0605 01:09:34.093296       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-attacher-548bd59657\" objectUID=87b0e58c-d26a-4c32-ae57-97e757323b15 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:34.093574       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-attacher-0\" objectUID=e0b96396-43bd-4748-82d5-12584631c3fc kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:34.103420       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.117.57).\nI0605 01:09:34.131080       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-provisioner-cpqsr\" objectUID=8e512fb9-e8eb-450f-a6cc-79f1826e955b kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.135220       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-provisioner-cpqsr\" objectUID=8e512fb9-e8eb-450f-a6cc-79f1826e955b kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.195502       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-provisioner-0\" objectUID=9d7fabb7-2f88-40e5-af77-8408a96abdda kind=\"Pod\" virtual=false\nI0605 01:09:34.202320       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-provisioner-0\" objectUID=9d7fabb7-2f88-40e5-af77-8408a96abdda kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:34.202535       1 stateful_set.go:419] StatefulSet has been deleted volume-1868-2863/csi-hostpath-provisioner\nI0605 01:09:34.202806       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-provisioner-f4bcb5858\" objectUID=f1243339-b564-4f47-8b8d-790386a3dca3 kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.209931       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-provisioner-f4bcb5858\" objectUID=f1243339-b564-4f47-8b8d-790386a3dca3 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:34.214497       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpathplugin-6bw6r\" objectUID=cc04084e-aa3b-43ce-9540-c08796372630 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.223051       1 garbagecollector.go:580] \"Deleting object\" 
object=\"provisioning-5504-9930/csi-hostpathplugin-6bw6r\" objectUID=cc04084e-aa3b-43ce-9540-c08796372630 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.257207       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-resizer-mz6mn\" objectUID=e0b34138-88a4-41c2-bb69-4a177eabbc58 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.261257       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-resizer-mz6mn\" objectUID=e0b34138-88a4-41c2-bb69-4a177eabbc58 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.287277       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpathplugin-0\" objectUID=ffc54b8f-5e20-4b03-91f8-c967dcdc56c9 kind=\"Pod\" virtual=false\nI0605 01:09:34.287509       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5504-9930/csi-hostpathplugin\nI0605 01:09:34.287540       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpathplugin-6997877c5\" objectUID=b9113cea-cb4f-4983-9750-1cf7560299d2 kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.289051       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpathplugin-6997877c5\" objectUID=b9113cea-cb4f-4983-9750-1cf7560299d2 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:34.289367       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpathplugin-0\" objectUID=ffc54b8f-5e20-4b03-91f8-c967dcdc56c9 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:34.308511       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-provisioner. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. 
EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.44.231).\nI0605 01:09:34.322267       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-resizer-858c9fc5b\" objectUID=a8bd6601-6e7c-48be-8903-9ea01553842f kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.322598       1 stateful_set.go:419] StatefulSet has been deleted volume-1868-2863/csi-hostpath-resizer\nI0605 01:09:34.322651       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-resizer-0\" objectUID=7792c06a-130a-49dd-9a3c-3d5e6dfd4458 kind=\"Pod\" virtual=false\nI0605 01:09:34.326522       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-resizer-0\" objectUID=7792c06a-130a-49dd-9a3c-3d5e6dfd4458 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:34.326522       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-resizer-858c9fc5b\" objectUID=a8bd6601-6e7c-48be-8903-9ea01553842f kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:34.344631       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-provisioner-dnwjp\" objectUID=b6c4d1be-4023-48d4-be07-40ba1f32ff97 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.349400       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-provisioner-dnwjp\" objectUID=b6c4d1be-4023-48d4-be07-40ba1f32ff97 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.377428       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-snapshotter-pgvxk\" objectUID=b1c4f172-b263-46cd-b80f-f8bedfba3a64 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.379731       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-snapshotter-pgvxk\" objectUID=b1c4f172-b263-46cd-b80f-f8bedfba3a64 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.403536       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-provisioner-8484bdf99d\" objectUID=a5d148ec-e85d-4d7f-b409-2e4271aa470c kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.403776       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5504-9930/csi-hostpath-provisioner\nI0605 01:09:34.403805       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-provisioner-0\" objectUID=bf846401-b99c-4df9-9d3f-1e51cae7d301 kind=\"Pod\" virtual=false\nI0605 01:09:34.405192       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-provisioner-8484bdf99d\" objectUID=a5d148ec-e85d-4d7f-b409-2e4271aa470c kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:34.419562       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-provisioner-0\" objectUID=bf846401-b99c-4df9-9d3f-1e51cae7d301 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:34.438725       1 garbagecollector.go:471] \"Processing object\" object=\"volume-1868-2863/csi-hostpath-snapshotter-59585d64b8\" objectUID=6cb1f5c4-a0ff-4c51-8068-d32747d348d0 kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.438835       1 stateful_set.go:419] StatefulSet has been deleted volume-1868-2863/csi-hostpath-snapshotter\nI0605 01:09:34.438860       1 garbagecollector.go:471] \"Processing object\" 
object=\"volume-1868-2863/csi-hostpath-snapshotter-0\" objectUID=6ee30a77-a56a-4e6e-9b3b-09beae9e1eb9 kind=\"Pod\" virtual=false\nI0605 01:09:34.456280       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-resizer-blvv4\" objectUID=0a9c870a-840a-43ec-a567-9b26ae94a80c kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.502256       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-snapshotter. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.68.75).\nI0605 01:09:34.519755       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-resizer-549774b5d8\" objectUID=b5a59e1c-5bb0-4e8c-9e2c-57424d240d9a kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.519869       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5504-9930/csi-hostpath-resizer\nI0605 01:09:34.519894       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-resizer-0\" objectUID=df91fbe5-1b1b-4b1e-baf3-c2588f84add4 kind=\"Pod\" virtual=false\nI0605 01:09:34.540528       1 pvc_protection_controller.go:291] PVC fsgroupchangepolicy-7514/awstn5nv is unused\nI0605 01:09:34.549045       1 pv_controller.go:638] volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:09:34.553440       1 pv_controller.go:864] volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" entered phase \"Released\"\nI0605 01:09:34.554963       1 pv_controller.go:1326] isVolumeReleased[pvc-e93901f9-58af-4a1a-b781-c87943427d76]: volume is released\nI0605 01:09:34.569874       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-snapshotter-59585d64b8\" objectUID=6cb1f5c4-a0ff-4c51-8068-d32747d348d0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:34.574339       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-snapshotter-bv7ng\" objectUID=3a90252a-1dab-4323-ba8d-f7540d5528b4 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:34.619648       1 garbagecollector.go:580] \"Deleting object\" object=\"volume-1868-2863/csi-hostpath-snapshotter-0\" objectUID=6ee30a77-a56a-4e6e-9b3b-09beae9e1eb9 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:34.637312       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-snapshotter-5df6495b64\" objectUID=b780617d-ab23-427c-a8ed-b110d87e317f kind=\"ControllerRevision\" virtual=false\nI0605 01:09:34.637415       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5504-9930/csi-hostpath-snapshotter\nI0605 01:09:34.637440       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-5504-9930/csi-hostpath-snapshotter-0\" objectUID=5f8ed2ce-aa4f-40d0-b010-4a2bbf366793 kind=\"Pod\" virtual=false\nI0605 01:09:34.671360       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-resizer-blvv4\" objectUID=0a9c870a-840a-43ec-a567-9b26ae94a80c kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.691548       1 aws_util.go:62] Error deleting EBS Disk volume aws://us-west-1a/vol-075d1b709a4d09719: error deleting EBS volume \"vol-075d1b709a4d09719\" since volume is currently attached to 
\"i-04b8aeda8cac6552a\"\nE0605 01:09:34.691611       1 goroutinemap.go:150] Operation for \"delete-pvc-e93901f9-58af-4a1a-b781-c87943427d76[f0eaaeda-5ee7-4421-bdbd-6efef610dcab]\" failed. No retries permitted until 2021-06-05 01:09:35.191590307 +0000 UTC m=+920.124120736 (durationBeforeRetry 500ms). Error: \"error deleting EBS volume \\\"vol-075d1b709a4d09719\\\" since volume is currently attached to \\\"i-04b8aeda8cac6552a\\\"\"\nI0605 01:09:34.691833       1 event.go:291] \"Event occurred\" object=\"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-075d1b709a4d09719\\\" since volume is currently attached to \\\"i-04b8aeda8cac6552a\\\"\"\nI0605 01:09:34.717641       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpathplugin. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.67.183.203).\nI0605 01:09:34.720044       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-resizer-549774b5d8\" objectUID=b5a59e1c-5bb0-4e8c-9e2c-57424d240d9a kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:34.768368       1 namespace_controller.go:185] Namespace has been deleted secrets-6948\nI0605 01:09:34.769625       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-resizer-0\" objectUID=df91fbe5-1b1b-4b1e-baf3-c2588f84add4 kind=\"Pod\" propagationPolicy=Background\nE0605 01:09:34.795449       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:09:34.869708       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-snapshotter-bv7ng\" objectUID=3a90252a-1dab-4323-ba8d-f7540d5528b4 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:34.969977       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-snapshotter-5df6495b64\" objectUID=b780617d-ab23-427c-a8ed-b110d87e317f kind=\"ControllerRevision\" propagationPolicy=Background\nI0605 01:09:35.020768       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-5504-9930/csi-hostpath-snapshotter-0\" objectUID=5f8ed2ce-aa4f-40d0-b010-4a2bbf366793 kind=\"Pod\" propagationPolicy=Background\nI0605 01:09:35.115844       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-resizer. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.66.117.57).\nI0605 01:09:35.385680       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3226\nI0605 01:09:35.430956       1 namespace_controller.go:185] Namespace has been deleted resourcequota-8126\nI0605 01:09:35.501195       1 pv_controller.go:864] volume \"local-pv2mq62\" entered phase \"Available\"\nI0605 01:09:35.510104       1 utils.go:413] couldn't find ipfamilies for headless service: provisioning-2962-3992/csi-hostpath-snapshotter. 
This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.65.68.75).\nI0605 01:09:35.550166       1 pv_controller.go:915] claim \"persistent-local-volumes-test-6223/pvc-w7hbh\" bound to volume \"local-pv2mq62\"\nI0605 01:09:35.555908       1 pv_controller.go:864] volume \"local-pv2mq62\" entered phase \"Bound\"\nI0605 01:09:35.555935       1 pv_controller.go:967] volume \"local-pv2mq62\" bound to claim \"persistent-local-volumes-test-6223/pvc-w7hbh\"\nI0605 01:09:35.560922       1 pv_controller.go:808] claim \"persistent-local-volumes-test-6223/pvc-w7hbh\" entered phase \"Bound\"\nI0605 01:09:35.801039       1 namespace_controller.go:185] Namespace has been deleted volumemode-7057\nI0605 01:09:35.817312       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-6223/pvc-w7hbh is unused\nI0605 01:09:35.822934       1 pv_controller.go:638] volume \"local-pv2mq62\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:35.826564       1 pv_controller.go:864] volume \"local-pv2mq62\" entered phase \"Released\"\nI0605 01:09:35.872755       1 pv_controller_base.go:504] deletion of claim \"persistent-local-volumes-test-6223/pvc-w7hbh\" was already processed\nI0605 01:09:36.216520       1 pvc_protection_controller.go:291] PVC volumemode-2861/pvc-ng8rh is unused\nI0605 01:09:36.222068       1 pv_controller.go:638] volume \"local-dj2dz\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:36.225012       1 pv_controller.go:864] volume \"local-dj2dz\" entered phase \"Released\"\nI0605 01:09:36.232062       1 namespace_controller.go:185] Namespace has been deleted projected-3052\nI0605 01:09:36.270010       1 pv_controller_base.go:504] deletion of claim \"volumemode-2861/pvc-ng8rh\" was already processed\nE0605 01:09:36.377811       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-3764/default: secrets \"default-token-sk6jh\" is forbidden: unable to create new content in namespace downward-api-3764 because it is being terminated\nE0605 01:09:36.423138       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2127/default: secrets \"default-token-wl9fj\" is forbidden: unable to create new content in namespace provisioning-2127 because it is being terminated\nI0605 01:09:36.640817       1 namespace_controller.go:185] Namespace has been deleted volume-5135\nI0605 01:09:36.739167       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:09:36.740764       1 operation_generator.go:1409] Verified volume is safe to detach for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:09:36.766351       1 namespace_controller.go:185] Namespace has been deleted volume-1868\nI0605 01:09:36.981344       1 namespace_controller.go:185] Namespace has been deleted provisioning-5504\nE0605 01:09:37.800037       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 
01:09:38.125830       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-1176/test-cleanup-controller\" need=0 deleting=1\nI0605 01:09:38.125862       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-1176/test-cleanup-controller\" relatedReplicaSets=[test-cleanup-deployment-685c4f8568 test-cleanup-controller]\nI0605 01:09:38.125909       1 controller_utils.go:604] \"Deleting pod\" controller=\"test-cleanup-controller\" pod=\"deployment-1176/test-cleanup-controller-qjfbl\"\nI0605 01:09:38.126412       1 event.go:291] \"Event occurred\" object=\"deployment-1176/test-cleanup-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-cleanup-controller to 0\"\nI0605 01:09:38.161547       1 event.go:291] \"Event occurred\" object=\"deployment-1176/test-cleanup-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-cleanup-controller-qjfbl\"\nI0605 01:09:38.161721       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-1176/test-cleanup-controller-qjfbl\" objectUID=298bcfc9-b239-4df4-a3b6-766319302d5b kind=\"CiliumEndpoint\" virtual=false\nI0605 01:09:38.210179       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-1176/test-cleanup-controller-qjfbl\" objectUID=298bcfc9-b239-4df4-a3b6-766319302d5b kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0605 01:09:38.651716       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-7402/default: secrets \"default-token-qkgmz\" is forbidden: unable to create new content in namespace configmap-7402 because it is being terminated\nI0605 01:09:38.770568       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-9910/test-rolling-update-with-lb-5b74d4d4b5\" need=3 creating=3\nI0605 01:09:38.771307       1 event.go:291] \"Event occurred\" object=\"deployment-9910/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-5b74d4d4b5 to 3\"\nI0605 01:09:38.783125       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9910/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0605 01:09:38.784188       1 event.go:291] \"Event occurred\" object=\"deployment-9910/test-rolling-update-with-lb-5b74d4d4b5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-5b74d4d4b5-8rvj7\"\nI0605 01:09:38.799887       1 event.go:291] \"Event occurred\" object=\"deployment-9910/test-rolling-update-with-lb-5b74d4d4b5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-5b74d4d4b5-7qk8m\"\nI0605 01:09:38.799920       1 event.go:291] \"Event occurred\" object=\"deployment-9910/test-rolling-update-with-lb-5b74d4d4b5\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-5b74d4d4b5-pwpfc\"\nE0605 01:09:38.930953       1 pv_controller.go:1437] error finding provisioning plugin for claim provisioning-2169/pvc-gcbhj: storageclass.storage.k8s.io 
\"provisioning-2169\" not found\nI0605 01:09:38.931238       1 event.go:291] \"Event occurred\" object=\"provisioning-2169/pvc-gcbhj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2169\\\" not found\"\nI0605 01:09:38.986221       1 pv_controller.go:864] volume \"local-6vhvg\" entered phase \"Available\"\nE0605 01:09:39.466488       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-5165/default: secrets \"default-token-tbmqk\" is forbidden: unable to create new content in namespace kubectl-5165 because it is being terminated\nI0605 01:09:39.629253       1 utils.go:413] couldn't find ipfamilies for headless service: endpointslice-7058/example-empty-selector. This could happen if controller manager is connected to an old apiserver that does not support ip families yet. EndpointSlices for this Service will use IPv4 as the IP Family based on familyOf(ClusterIP:100.70.162.213).\nI0605 01:09:39.815178       1 pvc_protection_controller.go:291] PVC provisioning-9000/pvc-2j2pl is unused\nI0605 01:09:39.933211       1 pv_controller.go:638] volume \"local-vh2nz\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:39.938389       1 pvc_protection_controller.go:291] PVC pv-9623/pvc-c5bnn is unused\nI0605 01:09:39.984292       1 pv_controller.go:638] volume \"local-vh2nz\" is released and reclaim policy \"Retain\" will be executed\nI0605 01:09:39.996590       1 pv_controller.go:864] volume \"local-vh2nz\" entered phase \"Released\"\nI0605 01:09:39.996823       1 garbagecollector.go:471] \"Processing object\" object=\"endpointslice-7058/example-empty-selector-v276r\" objectUID=cb89be45-0dad-446a-9917-c3144010f696 kind=\"EndpointSlice\" virtual=false\nI0605 01:09:40.009188       1 pv_controller.go:638] volume \"nfs-nwncc\" is released and reclaim policy \"Recycle\" will be executed\nI0605 01:09:40.009462       1 garbagecollector.go:580] \"Deleting object\" object=\"endpointslice-7058/example-empty-selector-v276r\" objectUID=cb89be45-0dad-446a-9917-c3144010f696 kind=\"EndpointSlice\" propagationPolicy=Background\nI0605 01:09:40.016247       1 pv_controller.go:864] volume \"nfs-nwncc\" entered phase \"Released\"\nI0605 01:09:40.024694       1 pv_controller_base.go:504] deletion of claim \"provisioning-9000/pvc-2j2pl\" was already processed\nI0605 01:09:40.029162       1 pv_controller.go:1326] isVolumeReleased[nfs-nwncc]: volume is released\nI0605 01:09:40.089537       1 event.go:291] \"Event occurred\" object=\"nfs-nwncc\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Successfully assigned default/recycler-for-nfs-nwncc to ip-172-20-52-198.us-west-1.compute.internal\"\nE0605 01:09:40.647033       1 tokens_controller.go:262] error synchronizing serviceaccount volume-1868-2863/default: secrets \"default-token-gcqj6\" is forbidden: unable to create new content in namespace volume-1868-2863 because it is being terminated\nI0605 01:09:40.986419       1 event.go:291] \"Event occurred\" object=\"nfs-nwncc\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Pulling image \\\"busybox:1.27\\\"\"\nE0605 01:09:41.231372       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0605 01:09:41.263669   
    1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-9010/test-rolling-update-controller\" need=1 creating=1\nI0605 01:09:41.270317       1 event.go:291] \"Event occurred\" object=\"deployment-9010/test-rolling-update-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-controller-zs4l5\"\nI0605 01:09:41.475244       1 namespace_controller.go:185] Namespace has been deleted downward-api-3764\nI0605 01:09:41.590307       1 namespace_controller.go:185] Namespace has been deleted provisioning-2127\nI0605 01:09:41.658533       1 namespace_controller.go:185] Namespace has been deleted emptydir-577\nI0605 01:09:42.082658       1 operation_generator.go:470] DetachVolume.Detach succeeded for volume \"pvc-e93901f9-58af-4a1a-b781-c87943427d76\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1a/vol-075d1b709a4d09719\") on node \"ip-172-20-52-198.us-west-1.compute.internal\" \nI0605 01:09:42.542335       1 event.go:291] \"Event occurred\" object=\"nfs-nwncc\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Successfully pulled image \\\"busybox:1.27\\\" in 1.555695441s\"\nI0605 01:09:42.575505       1 event.go:291] \"Event occurred\" object=\"nfs-nwncc\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Created container pv-recycler\"\nI0605 01:09:42.677722       1 event.go:291] \"Event occurred\" object=\"nfs-nwncc\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"RecyclerPod\" message=\"Recycler pod: Started container pv-recycler\"\nE0605 01:09:42.932608       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-6223/default: secrets \"default-token-p7g7d\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-6223 because it is being terminated\nI0605 01:09:43.119363       1 recycler_client.go:89] deleting recycler pod default/recycler-for-nfs-nwncc\nI0605 01:09:43.131758       1 pv_controller.go:1199] volume \"nfs-nwncc\" recycled\nI0605 01:09:43.132154       1 event.go:291] \"Event occurred\" object=\"nfs-nwncc\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeRecycled\" message=\"Volume recycled\"\nI0605 01:09:43.140802       1 pv_controller.go:864] volume \"nfs-nwncc\" entered phase \"Available\"\nE0605 01:09:43.339058       1 tokens_controller.go:262] error synchronizing serviceaccount projected-7822/default: secrets \"default-token-fmwkj\" is forbidden: unable to create new content in namespace projected-7822 because it is being terminated\nI0605 01:09:43.399442       1 pvc_protection_controller.go:291] PVC volume-expand-6947/csi-hostpathfpj5m is unused\nI0605 01:09:43.405598       1 pv_controller.go:638] volume \"pvc-38acfff7-4394-47ba-9e75-e9e2ca686cf7\" is released and reclaim policy \"Delete\" will be executed\nI0605 01:09:43.408429       1 pv_controller.go:864] volume \"pvc-38acfff7-4394-47ba-9e75-e9e2ca686cf7\" entered phase \"Released\"\nI0605 01:09:43.411703       1 pv_controller.go:1326] isVolumeReleased[pvc-38acfff7-4394-47ba-9e75-e9e2ca686cf7]: volume is released\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-50-5.us-west-1.compute.internal ====\n==== START logs for container kube-scheduler of pod kube-system/kube-scheduler-ip-172-20-50-5.us-west-1.compute.internal ====\nI0605 
00:54:16.213052       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0605 00:54:16.213445       1 flags.go:59] FLAG: --address=\"0.0.0.0\"\nI0605 00:54:16.213457       1 flags.go:59] FLAG: --algorithm-provider=\"\"\nI0605 00:54:16.213463       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0605 00:54:16.213468       1 flags.go:59] FLAG: --authentication-kubeconfig=\"\"\nI0605 00:54:16.213473       1 flags.go:59] FLAG: --authentication-skip-lookup=\"false\"\nI0605 00:54:16.213488       1 flags.go:59] FLAG: --authentication-token-webhook-cache-ttl=\"10s\"\nI0605 00:54:16.213496       1 flags.go:59] FLAG: --authentication-tolerate-lookup-failure=\"true\"\nI0605 00:54:16.213501       1 flags.go:59] FLAG: --authorization-always-allow-paths=\"[/healthz]\"\nI0605 00:54:16.213555       1 flags.go:59] FLAG: --authorization-kubeconfig=\"\"\nI0605 00:54:16.213560       1 flags.go:59] FLAG: --authorization-webhook-cache-authorized-ttl=\"10s\"\nI0605 00:54:16.213565       1 flags.go:59] FLAG: --authorization-webhook-cache-unauthorized-ttl=\"10s\"\nI0605 00:54:16.213571       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0605 00:54:16.213582       1 flags.go:59] FLAG: --cert-dir=\"\"\nI0605 00:54:16.213587       1 flags.go:59] FLAG: --client-ca-file=\"\"\nI0605 00:54:16.213591       1 flags.go:59] FLAG: --config=\"/var/lib/kube-scheduler/config.yaml\"\nI0605 00:54:16.213597       1 flags.go:59] FLAG: --contention-profiling=\"true\"\nI0605 00:54:16.213603       1 flags.go:59] FLAG: --experimental-logging-sanitization=\"false\"\nI0605 00:54:16.213607       1 flags.go:59] FLAG: --feature-gates=\"\"\nI0605 00:54:16.213615       1 flags.go:59] FLAG: --hard-pod-affinity-symmetric-weight=\"1\"\nI0605 00:54:16.213626       1 flags.go:59] FLAG: --help=\"false\"\nI0605 00:54:16.213631       1 flags.go:59] FLAG: --http2-max-streams-per-connection=\"0\"\nI0605 00:54:16.213637       1 flags.go:59] FLAG: --kube-api-burst=\"100\"\nI0605 00:54:16.213643       1 flags.go:59] FLAG: --kube-api-content-type=\"application/vnd.kubernetes.protobuf\"\nI0605 00:54:16.213652       1 flags.go:59] FLAG: --kube-api-qps=\"50\"\nI0605 00:54:16.213661       1 flags.go:59] FLAG: --kubeconfig=\"\"\nI0605 00:54:16.213665       1 flags.go:59] FLAG: --leader-elect=\"true\"\nI0605 00:54:16.213673       1 flags.go:59] FLAG: --leader-elect-lease-duration=\"15s\"\nI0605 00:54:16.213678       1 flags.go:59] FLAG: --leader-elect-renew-deadline=\"10s\"\nI0605 00:54:16.213683       1 flags.go:59] FLAG: --leader-elect-resource-lock=\"leases\"\nI0605 00:54:16.213688       1 flags.go:59] FLAG: --leader-elect-resource-name=\"kube-scheduler\"\nI0605 00:54:16.213693       1 flags.go:59] FLAG: --leader-elect-resource-namespace=\"kube-system\"\nI0605 00:54:16.213699       1 flags.go:59] FLAG: --leader-elect-retry-period=\"2s\"\nI0605 00:54:16.213704       1 flags.go:59] FLAG: --lock-object-name=\"kube-scheduler\"\nI0605 00:54:16.213712       1 flags.go:59] FLAG: --lock-object-namespace=\"kube-system\"\nI0605 00:54:16.213718       1 flags.go:59] FLAG: --log-backtrace-at=\":0\"\nI0605 00:54:16.213726       1 flags.go:59] FLAG: --log-dir=\"\"\nI0605 00:54:16.213731       1 flags.go:59] FLAG: --log-file=\"/var/log/kube-scheduler.log\"\nI0605 00:54:16.213737       1 flags.go:59] FLAG: --log-file-max-size=\"1800\"\nI0605 00:54:16.213742       1 flags.go:59] FLAG: --log-flush-frequency=\"5s\"\nI0605 00:54:16.213747       1 flags.go:59] FLAG: --logging-format=\"text\"\nI0605 00:54:16.213756       1 flags.go:59] FLAG: --logtostderr=\"false\"\nI0605 
00:54:16.213760       1 flags.go:59] FLAG: --master=\"\"\nI0605 00:54:16.213765       1 flags.go:59] FLAG: --one-output=\"false\"\nI0605 00:54:16.213784       1 flags.go:59] FLAG: --permit-port-sharing=\"false\"\nI0605 00:54:16.213789       1 flags.go:59] FLAG: --policy-config-file=\"\"\nI0605 00:54:16.213794       1 flags.go:59] FLAG: --policy-configmap=\"\"\nI0605 00:54:16.213798       1 flags.go:59] FLAG: --policy-configmap-namespace=\"kube-system\"\nI0605 00:54:16.213807       1 flags.go:59] FLAG: --port=\"10251\"\nI0605 00:54:16.213812       1 flags.go:59] FLAG: --profiling=\"true\"\nI0605 00:54:16.213817       1 flags.go:59] FLAG: --requestheader-allowed-names=\"[]\"\nI0605 00:54:16.213822       1 flags.go:59] FLAG: --requestheader-client-ca-file=\"\"\nI0605 00:54:16.213828       1 flags.go:59] FLAG: --requestheader-extra-headers-prefix=\"[x-remote-extra-]\"\nI0605 00:54:16.213837       1 flags.go:59] FLAG: --requestheader-group-headers=\"[x-remote-group]\"\nI0605 00:54:16.213843       1 flags.go:59] FLAG: --requestheader-username-headers=\"[x-remote-user]\"\nI0605 00:54:16.213949       1 flags.go:59] FLAG: --scheduler-name=\"default-scheduler\"\nI0605 00:54:16.213955       1 flags.go:59] FLAG: --secure-port=\"10259\"\nI0605 00:54:16.213960       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=\"\"\nI0605 00:54:16.213965       1 flags.go:59] FLAG: --skip-headers=\"false\"\nI0605 00:54:16.213969       1 flags.go:59] FLAG: --skip-log-headers=\"false\"\nI0605 00:54:16.213975       1 flags.go:59] FLAG: --stderrthreshold=\"2\"\nI0605 00:54:16.213979       1 flags.go:59] FLAG: --tls-cert-file=\"\"\nI0605 00:54:16.214006       1 flags.go:59] FLAG: --tls-cipher-suites=\"[]\"\nI0605 00:54:16.214012       1 flags.go:59] FLAG: --tls-min-version=\"\"\nI0605 00:54:16.214017       1 flags.go:59] FLAG: --tls-private-key-file=\"\"\nI0605 00:54:16.214021       1 flags.go:59] FLAG: --tls-sni-cert-key=\"[]\"\nI0605 00:54:16.214029       1 flags.go:59] FLAG: --use-legacy-policy-config=\"false\"\nI0605 00:54:16.214034       1 flags.go:59] FLAG: --v=\"2\"\nI0605 00:54:16.214038       1 flags.go:59] FLAG: --version=\"false\"\nI0605 00:54:16.214051       1 flags.go:59] FLAG: --vmodule=\"\"\nI0605 00:54:16.214056       1 flags.go:59] FLAG: --write-config-to=\"\"\nI0605 00:54:17.205117       1 serving.go:331] Generated self-signed cert in-memory\nW0605 00:54:17.580555       1 authentication.go:308] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.\nW0605 00:54:17.580576       1 authentication.go:332] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.\nW0605 00:54:17.580589       1 authorization.go:176] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.\nI0605 00:54:27.600945       1 factory.go:187] Creating scheduler from algorithm provider 'DefaultProvider'\nI0605 00:54:27.606911       1 configfile.go:72] Using component config:\napiVersion: kubescheduler.config.k8s.io/v1beta1\nclientConnection:\n  acceptContentTypes: \"\"\n  burst: 100\n  contentType: application/vnd.kubernetes.protobuf\n  kubeconfig: /var/lib/kube-scheduler/kubeconfig\n  qps: 50\nenableContentionProfiling: true\nenableProfiling: true\nhealthzBindAddress: 0.0.0.0:10251\nkind: 
KubeSchedulerConfiguration\nleaderElection:\n  leaderElect: true\n  leaseDuration: 15s\n  renewDeadline: 10s\n  resourceLock: leases\n  resourceName: kube-scheduler\n  resourceNamespace: kube-system\n  retryPeriod: 2s\nmetricsBindAddress: 0.0.0.0:10251\nparallelism: 16\npercentageOfNodesToScore: 0\npodInitialBackoffSeconds: 1\npodMaxBackoffSeconds: 10\nprofiles:\n- pluginConfig:\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: DefaultPreemptionArgs\n      minCandidateNodesAbsolute: 100\n      minCandidateNodesPercentage: 10\n    name: DefaultPreemption\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      hardPodAffinityWeight: 1\n      kind: InterPodAffinityArgs\n    name: InterPodAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeAffinityArgs\n    name: NodeAffinity\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesFitArgs\n    name: NodeResourcesFit\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      kind: NodeResourcesLeastAllocatedArgs\n      resources:\n      - name: cpu\n        weight: 1\n      - name: memory\n        weight: 1\n    name: NodeResourcesLeastAllocated\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      defaultingType: System\n      kind: PodTopologySpreadArgs\n    name: PodTopologySpread\n  - args:\n      apiVersion: kubescheduler.config.k8s.io/v1beta1\n      bindTimeoutSeconds: 600\n      kind: VolumeBindingArgs\n    name: VolumeBinding\n  plugins:\n    bind:\n      enabled:\n      - name: DefaultBinder\n        weight: 0\n    filter:\n      enabled:\n      - name: NodeUnschedulable\n        weight: 0\n      - name: NodeName\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n      - name: NodeAffinity\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: NodeResourcesFit\n        weight: 0\n      - name: VolumeRestrictions\n        weight: 0\n      - name: EBSLimits\n        weight: 0\n      - name: GCEPDLimits\n        weight: 0\n      - name: NodeVolumeLimits\n        weight: 0\n      - name: AzureDiskLimits\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n      - name: VolumeZone\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n    permit: {}\n    postBind: {}\n    postFilter:\n      enabled:\n      - name: DefaultPreemption\n        weight: 0\n    preBind:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    preFilter:\n      enabled:\n      - name: NodeResourcesFit\n        weight: 0\n      - name: NodePorts\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: InterPodAffinity\n        weight: 0\n      - name: VolumeBinding\n        weight: 0\n    preScore:\n      enabled:\n      - name: InterPodAffinity\n        weight: 0\n      - name: PodTopologySpread\n        weight: 0\n      - name: TaintToleration\n        weight: 0\n    queueSort:\n      enabled:\n      - name: PrioritySort\n        weight: 0\n    reserve:\n      enabled:\n      - name: VolumeBinding\n        weight: 0\n    score:\n      enabled:\n      - name: NodeResourcesBalancedAllocation\n        weight: 1\n      - name: ImageLocality\n        weight: 1\n      - name: InterPodAffinity\n        weight: 1\n      - name: NodeResourcesLeastAllocated\n        weight: 1\n      - name: NodeAffinity\n        weight: 1\n      - name: NodePreferAvoidPods\n 
       weight: 10000\n      - name: PodTopologySpread\n        weight: 2\n      - name: TaintToleration\n        weight: 1\n  schedulerName: default-scheduler\n\nI0605 00:54:27.606933       1 server.go:138] Starting Kubernetes Scheduler version v1.20.7\nW0605 00:54:27.608861       1 authorization.go:47] Authorization is disabled\nW0605 00:54:27.608874       1 authentication.go:40] Authentication is disabled\nI0605 00:54:27.608884       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251\nI0605 00:54:27.609924       1 tlsconfig.go:200] loaded serving cert [\"Generated self signed cert\"]: \"localhost@1622854457\" [serving] validServingFor=[127.0.0.1,localhost,localhost] issuer=\"localhost-ca@1622854456\" (2021-06-04 23:54:16 +0000 UTC to 2022-06-04 23:54:16 +0000 UTC (now=2021-06-05 00:54:27.609911871 +0000 UTC))\nI0605 00:54:27.610107       1 named_certificates.go:53] loaded SNI cert [0/\"self-signed loopback\"]: \"apiserver-loopback-client@1622854457\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1622854457\" (2021-06-04 23:54:17 +0000 UTC to 2022-06-04 23:54:17 +0000 UTC (now=2021-06-05 00:54:27.610099774 +0000 UTC))\nI0605 00:54:27.610133       1 secure_serving.go:197] Serving securely on [::]:10259\nI0605 00:54:27.610185       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0605 00:54:27.611621       1 reflector.go:219] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.611644       1 reflector.go:219] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.614210       1 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.614447       1 reflector.go:219] Starting reflector *v1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.614669       1 reflector.go:219] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.615133       1 reflector.go:219] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.615355       1 reflector.go:219] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.615576       1 reflector.go:219] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.615781       1 reflector.go:219] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.616016       1 reflector.go:219] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:27.616225       1 reflector.go:219] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134\nI0605 00:54:46.377865       1 trace.go:205] Trace[708631108]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.616) (total time: 18761ms):\nTrace[708631108]: [18.761809025s] [18.761809025s] END\nE0605 00:54:46.377890       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope\nI0605 00:54:46.378062       1 trace.go:205] Trace[69371988]: \"Reflector 
ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.614) (total time: 18763ms):\nTrace[69371988]: [18.763590266s] [18.763590266s] END\nE0605 00:54:46.378075       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope\nI0605 00:54:46.378180       1 trace.go:205] Trace[1040013247]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.611) (total time: 18766ms):\nTrace[1040013247]: [18.766516376s] [18.766516376s] END\nE0605 00:54:46.378190       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope\nI0605 00:54:46.378274       1 trace.go:205] Trace[1144761439]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.614) (total time: 18764ms):\nTrace[1144761439]: [18.764039986s] [18.764039986s] END\nE0605 00:54:46.378283       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nI0605 00:54:46.378377       1 trace.go:205] Trace[1400071818]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.614) (total time: 18763ms):\nTrace[1400071818]: [18.763675141s] [18.763675141s] END\nE0605 00:54:46.378385       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope\nI0605 00:54:46.378467       1 trace.go:205] Trace[1637907335]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.611) (total time: 18766ms):\nTrace[1637907335]: [18.766821741s] [18.766821741s] END\nE0605 00:54:46.378476       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope\nI0605 00:54:46.378558       1 trace.go:205] Trace[501114711]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.616) (total time: 18762ms):\nTrace[501114711]: [18.762313316s] [18.762313316s] END\nE0605 00:54:46.378567       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nI0605 00:54:46.378644       1 trace.go:205] Trace[1325595940]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.615) (total time: 18763ms):\nTrace[1325595940]: [18.76348939s] [18.76348939s] END\nE0605 00:54:46.378667       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list 
*v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope\nI0605 00:54:46.378769       1 trace.go:205] Trace[1947432532]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.615) (total time: 18763ms):\nTrace[1947432532]: [18.763172477s] [18.763172477s] END\nE0605 00:54:46.378778       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope\nI0605 00:54:46.378862       1 trace.go:205] Trace[378765767]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.615) (total time: 18763ms):\nTrace[378765767]: [18.763485187s] [18.763485187s] END\nE0605 00:54:46.378869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope\nI0605 00:54:46.378944       1 trace.go:205] Trace[1191213789]: \"Reflector ListAndWatch\" name:k8s.io/client-go/informers/factory.go:134 (05-Jun-2021 00:54:27.615) (total time: 18763ms):\nTrace[1191213789]: [18.763138791s] [18.763138791s] END\nE0605 00:54:46.378965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope\nE0605 00:54:47.253706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope\nE0605 00:54:47.313645       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope\nI0605 00:54:47.644151       1 node_tree.go:65] Added node \"ip-172-20-50-5.us-west-1.compute.internal\" in group \"us-west-1:\\x00:us-west-1a\" to NodeTree\nI0605 00:54:50.411875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler...\nI0605 00:54:50.416839       1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler\nI0605 00:55:04.068438       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-mmsj6\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0605 00:55:04.132655       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6gfkx\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0605 00:55:04.145375       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-kr4s6\" node=\"ip-172-20-50-5.us-west-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0605 00:55:04.145709       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/dns-controller-5f98b58844-wvlm6\" node=\"ip-172-20-50-5.us-west-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0605 00:55:04.145982       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-mmsj6\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0605 00:55:04.146598       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-operator-79f9ffb4-vnd5j\" node=\"ip-172-20-50-5.us-west-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0605 00:55:04.146894       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6gfkx\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0605 00:55:16.564145       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-mmsj6\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0605 00:55:16.564383       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6gfkx\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0605 00:55:16.591835       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/kops-controller-x7h4v\" node=\"ip-172-20-50-5.us-west-1.compute.internal\" evaluatedNodes=1 feasibleNodes=1\nI0605 00:55:21.425327       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-mmsj6\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0605 00:55:21.425497       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6gfkx\" err=\"0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.\"\nI0605 00:56:33.541299       1 node_tree.go:65] Added node \"ip-172-20-35-190.us-west-1.compute.internal\" in group \"us-west-1:\\x00:us-west-1a\" to NodeTree\nI0605 00:56:33.541561       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-mmsj6\" err=\"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0605 00:56:33.566772       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6gfkx\" err=\"0/2 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0605 00:56:33.605239       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-tzlgz\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=2 feasibleNodes=1\nI0605 00:56:33.680894       1 node_tree.go:65] Added node \"ip-172-20-52-198.us-west-1.compute.internal\" in group \"us-west-1:\\x00:us-west-1a\" to NodeTree\nI0605 00:56:33.732473       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-fmbqf\" 
node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=3 feasibleNodes=1\nI0605 00:56:35.741490       1 node_tree.go:65] Added node \"ip-172-20-56-177.us-west-1.compute.internal\" in group \"us-west-1:\\x00:us-west-1a\" to NodeTree\nI0605 00:56:35.782240       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-hmvm8\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=4 feasibleNodes=1\nI0605 00:56:35.899054       1 node_tree.go:65] Added node \"ip-172-20-63-110.us-west-1.compute.internal\" in group \"us-west-1:\\x00:us-west-1a\" to NodeTree\nI0605 00:56:35.973489       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/cilium-cw2mw\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:56:44.492368       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-8f5559c9b-mmsj6\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0605 00:56:44.519603       1 factory.go:321] \"Unable to schedule pod; no fit; waiting\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6gfkx\" err=\"0/5 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 4 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\"\nI0605 00:56:54.505139       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/coredns-8f5559c9b-mmsj6\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=2\nI0605 00:56:55.506196       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/coredns-autoscaler-6f594f4c58-6gfkx\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=2\nI0605 00:57:07.807593       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"kube-system/coredns-8f5559c9b-7sf75\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.236517       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"projected-7426/downwardapi-volume-600490fd-40c8-43a9-af8d-5d97ea21c1b6\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.355078       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-9725/image-pull-teste88857cf-2211-4297-bb63-d1d887a823a3\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.428941       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-cx45m\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.464890       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-nk9v4\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.468868       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-4z76g\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.481303       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-mmnlg\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 
00:59:38.489779       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-jljq4\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.489858       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-dphxf\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.514464       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-393/pod-0\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.514565       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-9447/alpine-nnp-false-9672294c-0a9d-4e20-b432-b82457c45227\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.514651       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-8652/termination-message-container4d601a62-9658-4b52-bdbd-fa500ea4b4c0\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.560139       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-671/hostexec-ip-172-20-52-198.us-west-1.compute.internal-76jg9\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:38.586205       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-393/pod-1\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.597364       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1941/hostexec-ip-172-20-35-190.us-west-1.compute.internal-ndnkt\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:38.623318       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9390/hostexec-ip-172-20-35-190.us-west-1.compute.internal-zgzpc\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:38.637946       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"disruption-393/pod-2\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.660919       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-vcpsg\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.664517       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-nzmhn\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.694293       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-4b9cf\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:38.786126       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-zcgn8\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:39.033837       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-9764/test-webserver-317b3fae-b838-4867-99ec-ffafef954e07\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:39.058572       1 scheduler.go:604] \"Successfully bound pod to node\" 
pod=\"volume-160/hostexec-ip-172-20-52-198.us-west-1.compute.internal-7gwtw\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:39.081528       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-probe-462/liveness-9b4968a6-997a-4d9c-8651-8c90c7c292d2\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:39.505443       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"secrets-3263/pod-secrets-b6baade9-d9a9-4104-b9f9-569bc244e4a9\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:39.891365       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-8466/nfs-server\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:39.938559       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-4039/hostexec-ip-172-20-35-190.us-west-1.compute.internal-b5k4j\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:40.479596       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5772/test-recreate-deployment-786dd7c454-f7kq5\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.584627       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-xmfpw\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.617809       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-n2xzn\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.618518       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-db4w6\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.658016       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-qf79j\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.658376       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"prestop-9122/server\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.658446       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-jhn66\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.658528       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-6fnq7\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.684851       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-xpg22\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.684940       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-x82zg\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.702518       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-87dbm\" 
node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.702611       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-dd94f59b7-fkgx8\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:40.797842       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-5252/pod-subpath-test-inlinevolume-jj6l\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:40.819277       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7004/hostexec-ip-172-20-35-190.us-west-1.compute.internal-xq992\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:41.127364       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"security-context-test-127/implicit-nonroot-uid\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:41.799647       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-214-7424/csi-mockplugin-0\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:41.835050       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-214-7424/csi-mockplugin-attacher-0\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:43.322386       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-7515/hostexec-ip-172-20-52-198.us-west-1.compute.internal-vttn2\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:43.893287       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-lnrtt\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:43.960275       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-mcxgs\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:44.016527       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-qkn49\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:44.074876       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-2k82j\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:44.279850       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-t2b2w\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:44.344161       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-zhmkr\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:44.415942       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-bdbw9\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:44.478932       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-xtx89\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:44.761034       1 
scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-6420/pvc-volume-tester-writer-ssp5l\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:45.601638       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-9798/termination-message-containera9f18a6a-eece-48ba-a30b-8599e95975d3\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:46.883241       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-kmfqs\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:46.944317       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-qwdlf\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:47.145294       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-4834-325/csi-hostpath-attacher-0\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:47.321455       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-4834-325/csi-hostpathplugin-0\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:47.422196       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-4834-325/csi-hostpath-provisioner-0\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:47.530054       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-4834-325/csi-hostpath-resizer-0\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:47.647191       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-expand-4834-325/csi-hostpath-snapshotter-0\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:48.346660       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-9550/external-provisioner-zbb6c\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:48.841724       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"prestop-9122/tester\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:50.116656       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-671/pod-subpath-test-preprovisionedpv-n5wg\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:50.403583       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-1941/pod-subpath-test-preprovisionedpv-pkm6\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:52.009697       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pods-5446/pod-hostip-4d1c78dc-90a3-48cc-9db0-409f993d8eac\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:52.099425       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-9390/pod-f0108422-d7ea-47e2-9480-4f16c17eec1f\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:52.434385       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-8qpxd\" 
node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:52.518356       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-cdjlg\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:52.624261       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-4mrl8\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:52.846342       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-zlnnc\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:52.985342       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-dgp6d\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:53.113216       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-x7nsk\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:54.414057       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-w6cc7\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:54.508309       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-69mqc\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:55.745387       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-zk96x\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:55.808284       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-zl5hj\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:55.870930       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-rctt8\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:56.050129       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-6f77887984-b9q29\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:56.103052       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-6f77887984-fbkqn\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:56.267425       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7004/pod-79acf3f8-a20e-4fe8-b787-97c09b5a3274\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:56.882294       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5772/test-recreate-deployment-f79dd4667-w92vb\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:57.670922       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"container-runtime-7235/terminate-cmd-rpa9eb890bc-50f7-4b48-931a-b1557997418d\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:58.121330       1 scheduler.go:604] \"Successfully bound pod 
to node\" pod=\"deployment-9432/webserver-6f77887984-khk2f\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 00:59:58.546883       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2194-5608/csi-mockplugin-0\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:58.595217       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2194-5608/csi-mockplugin-attacher-0\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:58.647952       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"csi-mock-volumes-2194-5608/csi-mockplugin-resizer-0\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 00:59:59.255188       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-6f77887984-bwnq2\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:00.219850       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-prf9r\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:00.230551       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-bbnz4\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:00.297922       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-s2hvl\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:00.369700       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-dd94f59b7-br7jw\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:02.415486       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-462mp\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:03.854376       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-b2dbs\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:04.294134       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-6841/exec-volume-test-preprovisionedpv-f4s5\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:04.571738       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-6f77887984-42sdw\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:04.665813       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-6f77887984-5s58h\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:04.719199       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-5wd6j\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:04.853400       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"volume-160/local-injector\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 01:00:05.084518       1 scheduler.go:604] \"Successfully bound pod to 
node\" pod=\"provisioning-4039/pod-subpath-test-preprovisionedpv-jl9c\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 01:00:05.123867       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-795d758f88-cq7lv\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:05.138757       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-795d758f88-ngsvh\" node=\"ip-172-20-52-198.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:05.139122       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-795d758f88-5nmsr\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:05.273911       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"pv-8466/pvc-tester-cx527\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:05.420098       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-795d758f88-zf55c\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:05.469551       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-5276/webserver-deployment-795d758f88-4scq9\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:05.617476       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"persistent-local-volumes-test-7004/pod-e25c3bfa-5780-41fb-a910-a67f2565550f\" node=\"ip-172-20-35-190.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 01:00:05.954326       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-6f77887984-7q5rx\" node=\"ip-172-20-56-177.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=4\nI0605 01:00:05.980514       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"provisioning-3002/pod-subpath-test-inlinevolume-n9x2\" node=\"ip-172-20-63-110.us-west-1.compute.internal\" evaluatedNodes=5 feasibleNodes=1\nI0605 01:00:06.625851       1 scheduler.go:604] \"Successfully bound pod to node\" pod=\"deployment-9432/webserver-7bc44776fd-lfdlq\" node=\"ip-172-20-56-177.u