Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-05-23 06:18
Elapsed: 29m58s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0523 06:19:27.717557    4010 up.go:43] Cleaning up any leaked resources from previous cluster
I0523 06:19:27.717590    4010 dumplogs.go:38] /logs/artifacts/a9e4d1c0-bb8e-11eb-b027-f2836e8f0ab3/kops toolbox dump --name e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ec2-user
I0523 06:19:27.732029    4031 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0523 06:19:27.732108    4031 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

Cluster.kops.k8s.io "e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io" not found
W0523 06:19:28.229364    4010 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0523 06:19:28.229436    4010 down.go:48] /logs/artifacts/a9e4d1c0-bb8e-11eb-b027-f2836e8f0ab3/kops delete cluster --name e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --yes
I0523 06:19:28.245485    4041 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0523 06:19:28.245678    4041 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io" not found
I0523 06:19:28.718475    4010 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/05/23 06:19:28 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0523 06:19:28.729893    4010 http.go:37] curl https://ip.jsb.workers.dev
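The harness needs the job pod's public IP for the --admin-access flag on the next command; it tries the GCE metadata endpoint first and, when that returns 404, falls back to a public IP-echo service. A rough shell equivalent (the Metadata-Flavor header is the usual GCE metadata requirement, assumed here):

  # try GCE metadata; -f makes the 404 fail the first curl so the fallback runs
  curl -sf -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip \
    || curl -sf https://ip.jsb.workers.dev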
I0523 06:19:28.841592    4010 up.go:144] /logs/artifacts/a9e4d1c0-bb8e-11eb-b027-f2836e8f0ab3/kops create cluster --name e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.19.11 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=309956199498/RHEL-8.3_HVM-20210209-x86_64-0-Hourly2-GP2 --channel=alpha --networking=cilium --container-runtime=docker --admin-access 35.192.111.110/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones ca-central-1a --master-size c5.large
I0523 06:19:28.858255    4051 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0523 06:19:28.858467    4051 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
I0523 06:19:28.902191    4051 create_cluster.go:728] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0523 06:19:29.392265    4051 new_cluster.go:1011]  Cloud Provider ID = aws
... skipping 42 lines ...

I0523 06:19:52.903515    4010 up.go:181] /logs/artifacts/a9e4d1c0-bb8e-11eb-b027-f2836e8f0ab3/kops validate cluster --name e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0523 06:19:52.917886    4072 featureflag.go:165] FeatureFlag "SpecOverrideFlag"=true
I0523 06:19:52.917969    4072 featureflag.go:165] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io

W0523 06:19:53.893062    4072 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
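Validation is stuck on the placeholder API record here, and the message above points at dns-controller and protokube for diagnostics. A minimal sketch of following that advice, assuming the standard kops names and the SSH key and user this job uses elsewhere (<master-ip> is a placeholder):

  # does the api record still point at the 203.0.113.123 placeholder?
  dig +short api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io
  # dns-controller runs as a deployment in kube-system (reachable once the API record resolves)
  kubectl -n kube-system logs deploy/dns-controller
  # protokube runs on the master itself (docker runtime in this job); docker logs the container it shows
  ssh -i /etc/aws-ssh/aws-ssh-private ec2-user@<master-ip> 'sudo docker ps --filter name=protokube'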
W0523 06:20:03.930877    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
... skipping 175 lines: the identical INSTANCE GROUPS / dns "Validation Failed" block repeated on each ~10s retry (06:20:13 through 06:21:44) ...
W0523 06:21:54.384717    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
W0523 06:22:04.407084    4072 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping 207 lines: the same block repeated on each ~10s retry (06:22:14 through 06:24:04) ...
W0523 06:24:14.890848    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
W0523 06:24:24.913668    4072 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
... skipping 111 lines: the same block repeated on each ~10s retry (06:24:34 through 06:25:25) ...
W0523 06:25:35.169987    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 8 lines ...
Machine	i-03e33b3471bcf6e9f					machine "i-03e33b3471bcf6e9f" has not yet joined cluster
Machine	i-0ec4cc948b7b1f9be					machine "i-0ec4cc948b7b1f9be" has not yet joined cluster
Pod	kube-system/cilium-vmkmw				system-node-critical pod "cilium-vmkmw" is not ready (cilium-agent)
Pod	kube-system/kube-dns-696cb84c7-jwk9v			system-cluster-critical pod "kube-dns-696cb84c7-jwk9v" is pending
Pod	kube-system/kube-dns-autoscaler-55f8f75459-m2cm5	system-cluster-critical pod "kube-dns-autoscaler-55f8f75459-m2cm5" is pending

Validation Failed
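The failure has now moved past DNS: two machines have not yet joined and a cilium agent is not ready. Assuming the kops kubeconfig resolves at this point, a quick way to watch this settle (the pod name is taken from the errors above):

  kubectl get nodes -o wide
  kubectl -n kube-system get pods -o wide | grep -E 'cilium|kube-dns'
  kubectl -n kube-system describe pod cilium-vmkmw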
W0523 06:25:46.500174    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 10 lines ...
Node	ip-172-20-52-132.ca-central-1.compute.internal		node "ip-172-20-52-132.ca-central-1.compute.internal" of role "node" is not ready
Pod	kube-system/cilium-nfrh7				system-node-critical pod "cilium-nfrh7" is pending
Pod	kube-system/cilium-vmkmw				system-node-critical pod "cilium-vmkmw" is not ready (cilium-agent)
Pod	kube-system/kube-dns-696cb84c7-jwk9v			system-cluster-critical pod "kube-dns-696cb84c7-jwk9v" is pending
Pod	kube-system/kube-dns-autoscaler-55f8f75459-m2cm5	system-cluster-critical pod "kube-dns-autoscaler-55f8f75459-m2cm5" is pending

Validation Failed
W0523 06:25:57.503541    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 16 lines ...
Pod	kube-system/cilium-nfrh7				system-node-critical pod "cilium-nfrh7" is not ready (cilium-agent)
Pod	kube-system/cilium-nvd5v				system-node-critical pod "cilium-nvd5v" is pending
Pod	kube-system/cilium-vmkmw				system-node-critical pod "cilium-vmkmw" is not ready (cilium-agent)
Pod	kube-system/kube-dns-696cb84c7-jwk9v			system-cluster-critical pod "kube-dns-696cb84c7-jwk9v" is pending
Pod	kube-system/kube-dns-autoscaler-55f8f75459-m2cm5	system-cluster-critical pod "kube-dns-autoscaler-55f8f75459-m2cm5" is pending

Validation Failed
W0523 06:26:08.543593    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 15 lines ...
Pod	kube-system/cilium-nfrh7				system-node-critical pod "cilium-nfrh7" is not ready (cilium-agent)
Pod	kube-system/cilium-nvd5v				system-node-critical pod "cilium-nvd5v" is not ready (cilium-agent)
Pod	kube-system/cilium-vmkmw				system-node-critical pod "cilium-vmkmw" is not ready (cilium-agent)
Pod	kube-system/kube-dns-696cb84c7-jwk9v			system-cluster-critical pod "kube-dns-696cb84c7-jwk9v" is pending
Pod	kube-system/kube-dns-autoscaler-55f8f75459-m2cm5	system-cluster-critical pod "kube-dns-autoscaler-55f8f75459-m2cm5" is pending

Validation Failed
W0523 06:26:19.581119    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 10 lines ...
Pod	kube-system/cilium-7mb72		system-node-critical pod "cilium-7mb72" is not ready (cilium-agent)
Pod	kube-system/cilium-nfrh7		system-node-critical pod "cilium-nfrh7" is not ready (cilium-agent)
Pod	kube-system/cilium-nvd5v		system-node-critical pod "cilium-nvd5v" is not ready (cilium-agent)
Pod	kube-system/kube-dns-696cb84c7-jwk9v	system-cluster-critical pod "kube-dns-696cb84c7-jwk9v" is pending
Pod	kube-system/kube-dns-696cb84c7-lvbcn	system-cluster-critical pod "kube-dns-696cb84c7-lvbcn" is pending

Validation Failed
W0523 06:26:30.661337    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/cilium-nvd5v		system-node-critical pod "cilium-nvd5v" is not ready (cilium-agent)
Pod	kube-system/kube-dns-696cb84c7-lvbcn	system-cluster-critical pod "kube-dns-696cb84c7-lvbcn" is not ready (kubedns)

Validation Failed
W0523 06:26:41.584871    4072 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-ca-central-1a	Master	c5.large	1	1	ca-central-1a
nodes-ca-central-1a	Node	t3.medium	4	4	ca-central-1a

... skipping 1106 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:28:58.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
May 23 06:28:59.287: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [1.140 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:126

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 52 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:28:58.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
May 23 06:29:00.354: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-5845cbfd-485c-49ce-b58f-dd756af6bb25
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:00.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1663" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:00.589: INFO: Driver vsphere doesn't support ntfs -- skipping
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:00.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-2578" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":1,"skipped":12,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:00.749: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 99 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:28:59.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1" in namespace "downward-api-1629" to be "Succeeded or Failed"
May 23 06:28:59.388: INFO: Pod "downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1": Phase="Pending", Reason="", readiness=false. Elapsed: 72.461324ms
May 23 06:29:01.424: INFO: Pod "downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108286407s
May 23 06:29:03.458: INFO: Pod "downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142861309s
May 23 06:29:05.493: INFO: Pod "downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177468252s
STEP: Saw pod success
May 23 06:29:05.493: INFO: Pod "downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1" satisfied condition "Succeeded or Failed"
May 23 06:29:05.527: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1 container client-container: <nil>
STEP: delete the pod
May 23 06:29:05.625: INFO: Waiting for pod downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1 to disappear
May 23 06:29:05.660: INFO: Pod downwardapi-volume-be92959d-6f16-435e-adf1-c5c72e5897c1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:7.606 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:05.788: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:06.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7029" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:06.299: INFO: Only supported for providers [openstack] (not aws)
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node_lease.go:112
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:06.996: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver gluster doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 45 lines ...
STEP: Building a namespace api object, basename container-runtime
May 23 06:28:58.357: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 23 06:29:07.862: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:08.080: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 26 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:47
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:60
STEP: Creating a pod to test emptydir subpath on tmpfs
May 23 06:28:58.341: INFO: Waiting up to 5m0s for pod "pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c" in namespace "emptydir-381" to be "Succeeded or Failed"
May 23 06:28:58.392: INFO: Pod "pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c": Phase="Pending", Reason="", readiness=false. Elapsed: 50.501624ms
May 23 06:29:00.426: INFO: Pod "pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085085116s
May 23 06:29:02.461: INFO: Pod "pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120087963s
May 23 06:29:04.502: INFO: Pod "pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16065565s
May 23 06:29:06.538: INFO: Pod "pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.197164918s
May 23 06:29:08.573: INFO: Pod "pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.232153239s
STEP: Saw pod success
May 23 06:29:08.573: INFO: Pod "pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c" satisfied condition "Succeeded or Failed"
May 23 06:29:08.608: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c container test-container: <nil>
STEP: delete the pod
May 23 06:29:08.693: INFO: Waiting for pod pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c to disappear
May 23 06:29:08.727: INFO: Pod pod-bd5ad482-2ec3-489f-b97d-92e44eb0840c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:60
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:08.858: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 135 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:29:00.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e" in namespace "downward-api-5268" to be "Succeeded or Failed"
May 23 06:29:00.541: INFO: Pod "downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e": Phase="Pending", Reason="", readiness=false. Elapsed: 34.46016ms
May 23 06:29:02.577: INFO: Pod "downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069536045s
May 23 06:29:04.652: INFO: Pod "downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144803468s
May 23 06:29:06.687: INFO: Pod "downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180189584s
May 23 06:29:08.722: INFO: Pod "downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.214960056s
STEP: Saw pod success
May 23 06:29:08.722: INFO: Pod "downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e" satisfied condition "Succeeded or Failed"
May 23 06:29:08.756: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e container client-container: <nil>
STEP: delete the pod
May 23 06:29:08.842: INFO: Waiting for pod downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e to disappear
May 23 06:29:08.876: INFO: Pod downwardapi-volume-96f5a4ef-0d14-4a82-b136-4cd2cc2ec29e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.753 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:09.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1351" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set","total":-1,"completed":2,"skipped":15,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:09.619: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 54 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 23 06:29:10.522: INFO: Successfully updated pod "pod-update-activedeadlineseconds-15b8970d-0671-4da5-8f5e-46f4c60e76c1"
May 23 06:29:10.522: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-15b8970d-0671-4da5-8f5e-46f4c60e76c1" in namespace "pods-7367" to be "terminated due to deadline exceeded"
May 23 06:29:10.556: INFO: Pod "pod-update-activedeadlineseconds-15b8970d-0671-4da5-8f5e-46f4c60e76c1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 33.729225ms
May 23 06:29:10.556: INFO: Pod "pod-update-activedeadlineseconds-15b8970d-0671-4da5-8f5e-46f4c60e76c1" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:10.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7367" for this suite.


• [SLOW TEST:12.471 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 3 lines ...
STEP: Building a namespace api object, basename provisioning
May 23 06:28:58.902: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
May 23 06:28:58.978: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 23 06:28:59.086: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2554" in namespace "provisioning-2554" to be "Succeeded or Failed"
May 23 06:28:59.125: INFO: Pod "hostpath-symlink-prep-provisioning-2554": Phase="Pending", Reason="", readiness=false. Elapsed: 39.15159ms
May 23 06:29:01.161: INFO: Pod "hostpath-symlink-prep-provisioning-2554": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074870499s
May 23 06:29:03.194: INFO: Pod "hostpath-symlink-prep-provisioning-2554": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10849758s
May 23 06:29:05.228: INFO: Pod "hostpath-symlink-prep-provisioning-2554": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142379653s
May 23 06:29:07.262: INFO: Pod "hostpath-symlink-prep-provisioning-2554": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176045113s
May 23 06:29:09.295: INFO: Pod "hostpath-symlink-prep-provisioning-2554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.209818097s
STEP: Saw pod success
May 23 06:29:09.296: INFO: Pod "hostpath-symlink-prep-provisioning-2554" satisfied condition "Succeeded or Failed"
May 23 06:29:09.296: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2554" in namespace "provisioning-2554"
May 23 06:29:09.334: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2554" to be fully deleted
May 23 06:29:09.367: INFO: Creating resource for inline volume
May 23 06:29:09.367: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
May 23 06:29:09.368: INFO: Deleting pod "pod-subpath-test-inlinevolume-cwzq" in namespace "provisioning-2554"
May 23 06:29:09.451: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2554" in namespace "provisioning-2554" to be "Succeeded or Failed"
May 23 06:29:09.487: INFO: Pod "hostpath-symlink-prep-provisioning-2554": Phase="Pending", Reason="", readiness=false. Elapsed: 35.481125ms
May 23 06:29:11.522: INFO: Pod "hostpath-symlink-prep-provisioning-2554": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070677976s
STEP: Saw pod success
May 23 06:29:11.522: INFO: Pod "hostpath-symlink-prep-provisioning-2554" satisfied condition "Succeeded or Failed"
May 23 06:29:11.522: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2554" in namespace "provisioning-2554"
May 23 06:29:11.561: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2554" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:11.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2554" for this suite.
... skipping 21 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:29:02.578: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80" in namespace "projected-7996" to be "Succeeded or Failed"
May 23 06:29:02.614: INFO: Pod "downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80": Phase="Pending", Reason="", readiness=false. Elapsed: 36.742904ms
May 23 06:29:04.651: INFO: Pod "downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073686563s
May 23 06:29:06.687: INFO: Pod "downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109685685s
May 23 06:29:08.722: INFO: Pod "downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144392029s
May 23 06:29:10.757: INFO: Pod "downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179258821s
May 23 06:29:12.792: INFO: Pod "downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.214512568s
STEP: Saw pod success
May 23 06:29:12.792: INFO: Pod "downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80" satisfied condition "Succeeded or Failed"
May 23 06:29:12.827: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80 container client-container: <nil>
STEP: delete the pod
May 23 06:29:12.916: INFO: Waiting for pod downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80 to disappear
May 23 06:29:12.951: INFO: Pod downwardapi-volume-2e71d8db-4db1-4beb-9910-0a2844ba6b80 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.747 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:13.037: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 135 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:92
STEP: Creating a pod to test downward API volume plugin
May 23 06:29:11.897: INFO: Waiting up to 5m0s for pod "metadata-volume-119f418c-dc33-4730-9bd8-08c8678ec4ec" in namespace "downward-api-3830" to be "Succeeded or Failed"
May 23 06:29:11.931: INFO: Pod "metadata-volume-119f418c-dc33-4730-9bd8-08c8678ec4ec": Phase="Pending", Reason="", readiness=false. Elapsed: 33.602462ms
May 23 06:29:13.964: INFO: Pod "metadata-volume-119f418c-dc33-4730-9bd8-08c8678ec4ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.067202686s
STEP: Saw pod success
May 23 06:29:13.964: INFO: Pod "metadata-volume-119f418c-dc33-4730-9bd8-08c8678ec4ec" satisfied condition "Succeeded or Failed"
May 23 06:29:13.998: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod metadata-volume-119f418c-dc33-4730-9bd8-08c8678ec4ec container client-container: <nil>
STEP: delete the pod
May 23 06:29:14.077: INFO: Waiting for pod metadata-volume-119f418c-dc33-4730-9bd8-08c8678ec4ec to disappear
May 23 06:29:14.112: INFO: Pod metadata-volume-119f418c-dc33-4730-9bd8-08c8678ec4ec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:14.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3830" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:14.192: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:14.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9316" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated","total":-1,"completed":3,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:14.232: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-916b9a2e-77fe-4d48-8695-ec0a64dca9ec
STEP: Creating a pod to test consume secrets
May 23 06:29:01.859: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d" in namespace "projected-565" to be "Succeeded or Failed"
May 23 06:29:01.894: INFO: Pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.807513ms
May 23 06:29:03.930: INFO: Pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070098914s
May 23 06:29:05.968: INFO: Pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108513049s
May 23 06:29:08.003: INFO: Pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143564433s
May 23 06:29:10.038: INFO: Pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178447968s
May 23 06:29:12.074: INFO: Pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.21430301s
May 23 06:29:14.110: INFO: Pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.250990379s
STEP: Saw pod success
May 23 06:29:14.111: INFO: Pod "pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d" satisfied condition "Succeeded or Failed"
May 23 06:29:14.145: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d container secret-volume-test: <nil>
STEP: delete the pod
May 23 06:29:14.226: INFO: Waiting for pod pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d to disappear
May 23 06:29:14.260: INFO: Pod pod-projected-secrets-d271300f-0c34-4594-a2df-9f86dbeaab0d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:13.526 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:94
May 23 06:29:14.344: INFO: Driver "nfs" does not support block volume mode - skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
... skipping 83 lines ...
• [SLOW TEST:8.612 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:14.950: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 28 lines ...
• [SLOW TEST:18.603 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:16.731: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 42 lines ...
STEP: Building a namespace api object, basename pvc-protection
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:71
May 23 06:29:16.958: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
May 23 06:29:17.048: INFO: error finding default storageClass : No default storage class found
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:17.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pvc-protection-6504" for this suite.
[AfterEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:106
... skipping 2 lines ...
S [SKIPPING] in Spec Setup (BeforeEach) [0.404 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:143

  error finding default storageClass : No default storage class found

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:825
------------------------------
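The PVC Protection skip above is environmental: the test's PVC names no StorageClass, and this cluster has no class marked as default. For reference, a default class is simply one annotated with storageclass.kubernetes.io/is-default-class; a sketch of creating such a class with client-go follows (the class name and provisioner are assumptions for an AWS cluster like this one, not something the job itself does).

    package scsketch

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createDefaultClass creates a StorageClass carrying the default-class annotation,
    // so PVCs that name no class (like the skipped test's) would get it.
    func createDefaultClass(ctx context.Context, client kubernetes.Interface) error {
        sc := &storagev1.StorageClass{
            ObjectMeta: metav1.ObjectMeta{
                Name: "gp2", // illustrative name
                Annotations: map[string]string{
                    "storageclass.kubernetes.io/is-default-class": "true",
                },
            },
            Provisioner: "kubernetes.io/aws-ebs", // in-tree EBS provisioner; an assumption for this cluster
        }
        _, err := client.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
        return err
    }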
SSSS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
... skipping 28 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:18.407: INFO: Driver local doesn't support ext4 -- skipping
... skipping 90 lines ...
• [SLOW TEST:20.806 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:19.045: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:19.082: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 219 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:388
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":4,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:19.836: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 40 lines ...
May 23 06:29:15.135: INFO: PersistentVolumeClaim pvc-862cx found but phase is Pending instead of Bound.
May 23 06:29:17.171: INFO: PersistentVolumeClaim pvc-862cx found and phase=Bound (10.211683635s)
May 23 06:29:17.171: INFO: Waiting up to 3m0s for PersistentVolume local-6njnf to have phase Bound
May 23 06:29:17.210: INFO: PersistentVolume local-6njnf found and phase=Bound (39.817008ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-cvfp
STEP: Creating a pod to test exec-volume-test
May 23 06:29:17.323: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-cvfp" in namespace "volume-5969" to be "Succeeded or Failed"
May 23 06:29:17.356: INFO: Pod "exec-volume-test-preprovisionedpv-cvfp": Phase="Pending", Reason="", readiness=false. Elapsed: 33.414748ms
May 23 06:29:19.392: INFO: Pod "exec-volume-test-preprovisionedpv-cvfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068670165s
May 23 06:29:21.425: INFO: Pod "exec-volume-test-preprovisionedpv-cvfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102458999s
STEP: Saw pod success
May 23 06:29:21.426: INFO: Pod "exec-volume-test-preprovisionedpv-cvfp" satisfied condition "Succeeded or Failed"
May 23 06:29:21.460: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-cvfp container exec-container-preprovisionedpv-cvfp: <nil>
STEP: delete the pod
May 23 06:29:21.544: INFO: Waiting for pod exec-volume-test-preprovisionedpv-cvfp to disappear
May 23 06:29:21.578: INFO: Pod exec-volume-test-preprovisionedpv-cvfp no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-cvfp
May 23 06:29:21.578: INFO: Deleting pod "exec-volume-test-preprovisionedpv-cvfp" in namespace "volume-5969"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:22.101: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 95 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1303
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:28:58.512: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
May 23 06:29:14.199: INFO: PersistentVolumeClaim pvc-vdqrj found but phase is Pending instead of Bound.
May 23 06:29:16.233: INFO: PersistentVolumeClaim pvc-vdqrj found and phase=Bound (8.171269762s)
May 23 06:29:16.233: INFO: Waiting up to 3m0s for PersistentVolume local-rv876 to have phase Bound
May 23 06:29:16.267: INFO: PersistentVolume local-rv876 found and phase=Bound (34.04305ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-sqgg
STEP: Creating a pod to test exec-volume-test
May 23 06:29:16.378: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-sqgg" in namespace "volume-6953" to be "Succeeded or Failed"
May 23 06:29:16.412: INFO: Pod "exec-volume-test-preprovisionedpv-sqgg": Phase="Pending", Reason="", readiness=false. Elapsed: 33.967781ms
May 23 06:29:18.447: INFO: Pod "exec-volume-test-preprovisionedpv-sqgg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068742819s
May 23 06:29:20.481: INFO: Pod "exec-volume-test-preprovisionedpv-sqgg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103229592s
STEP: Saw pod success
May 23 06:29:20.481: INFO: Pod "exec-volume-test-preprovisionedpv-sqgg" satisfied condition "Succeeded or Failed"
May 23 06:29:20.515: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-sqgg container exec-container-preprovisionedpv-sqgg: <nil>
STEP: delete the pod
May 23 06:29:20.609: INFO: Waiting for pod exec-volume-test-preprovisionedpv-sqgg to disappear
May 23 06:29:20.643: INFO: Pod exec-volume-test-preprovisionedpv-sqgg no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-sqgg
May 23 06:29:20.643: INFO: Deleting pod "exec-volume-test-preprovisionedpv-sqgg" in namespace "volume-6953"
... skipping 50 lines ...
May 23 06:29:13.274: INFO: PersistentVolumeClaim pvc-ss7rj found but phase is Pending instead of Bound.
May 23 06:29:15.387: INFO: PersistentVolumeClaim pvc-ss7rj found and phase=Bound (6.215693233s)
May 23 06:29:15.387: INFO: Waiting up to 3m0s for PersistentVolume local-tbpvl to have phase Bound
May 23 06:29:15.456: INFO: PersistentVolume local-tbpvl found and phase=Bound (69.064093ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ntcx
STEP: Creating a pod to test subpath
May 23 06:29:15.578: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ntcx" in namespace "provisioning-5723" to be "Succeeded or Failed"
May 23 06:29:15.614: INFO: Pod "pod-subpath-test-preprovisionedpv-ntcx": Phase="Pending", Reason="", readiness=false. Elapsed: 35.944988ms
May 23 06:29:17.649: INFO: Pod "pod-subpath-test-preprovisionedpv-ntcx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070444412s
May 23 06:29:19.683: INFO: Pod "pod-subpath-test-preprovisionedpv-ntcx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105016547s
May 23 06:29:21.718: INFO: Pod "pod-subpath-test-preprovisionedpv-ntcx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139658048s
STEP: Saw pod success
May 23 06:29:21.718: INFO: Pod "pod-subpath-test-preprovisionedpv-ntcx" satisfied condition "Succeeded or Failed"
May 23 06:29:21.752: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-ntcx container test-container-volume-preprovisionedpv-ntcx: <nil>
STEP: delete the pod
May 23 06:29:21.848: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ntcx to disappear
May 23 06:29:21.887: INFO: Pod pod-subpath-test-preprovisionedpv-ntcx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ntcx
May 23 06:29:21.887: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ntcx" in namespace "provisioning-5723"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:22.753: INFO: Only supported for providers [azure] (not aws)
... skipping 116 lines ...
• [SLOW TEST:19.724 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:26.785: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 155 lines ...
STEP: creating an object not containing a namespace with in-cluster config
May 23 06:29:23.071: INFO: Running '/tmp/kubectl3905167308/kubectl --server=https://api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8065 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
May 23 06:29:23.952: INFO: rc: 255
STEP: trying to use kubectl with invalid token
May 23 06:29:23.952: INFO: Running '/tmp/kubectl3905167308/kubectl --server=https://api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8065 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
May 23 06:29:24.515: INFO: rc: 255
May 23 06:29:24.515: INFO: got err error running /tmp/kubectl3905167308/kubectl --server=https://api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8065 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0523 06:29:24.463064     170 merged_client_builder.go:163] Using in-cluster namespace
I0523 06:29:24.463275     170 merged_client_builder.go:121] Using in-cluster configuration
I0523 06:29:24.465806     170 merged_client_builder.go:121] Using in-cluster configuration
I0523 06:29:24.471321     170 merged_client_builder.go:121] Using in-cluster configuration
I0523 06:29:24.471670     170 round_trippers.go:421] GET https://100.64.0.1:443/api/v1/namespaces/kubectl-8065/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0523 06:29:24.477841     170 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc0002ee000, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d03b80, 0xc000000003, 0x0, 0x0, 0xc00015cc40, 0x2ae3039, 0xa, 0x73, 0x40b300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d03b80, 0x3, 0x0, 0x0, 0x2, 0xc0009e5ac8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00055e600, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e5d900, 0xc0004c8220, 0x1d06430)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8b5
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc00032d8c0, 0xc00042ac30, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
May 23 06:29:24.515: INFO: Running '/tmp/kubectl3905167308/kubectl --server=https://api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8065 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
May 23 06:29:25.142: INFO: rc: 255
May 23 06:29:25.142: INFO: got err error running /tmp/kubectl3905167308/kubectl --server=https://api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8065 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0523 06:29:25.004059     179 merged_client_builder.go:163] Using in-cluster namespace
I0523 06:29:25.028842     179 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 24 milliseconds
I0523 06:29:25.028924     179 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0523 06:29:25.055941     179 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 26 milliseconds
I0523 06:29:25.056027     179 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0523 06:29:25.056045     179 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0523 06:29:25.079921     179 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 23 milliseconds
I0523 06:29:25.079981     179 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0523 06:29:25.087432     179 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 7 milliseconds
I0523 06:29:25.087488     179 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0523 06:29:25.105710     179 round_trippers.go:444] GET http://invalid/api?timeout=32s  in 18 milliseconds
I0523 06:29:25.105773     179 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 100.64.0.10:53: no such host
I0523 06:29:25.105805     179 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 100.64.0.10:53: no such host
F0523 06:29:25.105820     179 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 100.64.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000ae001, 0xc0002c81c0, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d03b80, 0xc000000003, 0x0, 0x0, 0xc000452150, 0x2ae3039, 0xa, 0x73, 0x40b300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d03b80, 0x3, 0x0, 0x0, 0x2, 0xc00056fac8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000089c80, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e5cca0, 0xc000347d70, 0x1d06430)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x945
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc00046bb80, 0xc000511080, 0x1, 0x3)
... skipping 30 lines ...
	/usr/local/go/src/net/http/client.go:397 +0x337

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
May 23 06:29:25.142: INFO: Running '/tmp/kubectl3905167308/kubectl --server=https://api.e2e-04c2e2e3dd-ff2a0.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-8065 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
May 23 06:29:25.702: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
May 23 06:29:25.702: INFO: stdout: "I0523 06:29:25.646065     189 merged_client_builder.go:121] Using in-cluster configuration\nI0523 06:29:25.655158     189 merged_client_builder.go:121] Using in-cluster configuration\nI0523 06:29:25.660270     189 merged_client_builder.go:121] Using in-cluster configuration\nI0523 06:29:25.666130     189 round_trippers.go:444] GET https://100.64.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 5 milliseconds\nNo resources found in invalid namespace.\n"
May 23 06:29:25.702: INFO: stdout: I0523 06:29:25.646065     189 merged_client_builder.go:121] Using in-cluster configuration
... skipping 72 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should handle in-cluster config
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:635
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:27.285: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 59 lines ...
May 23 06:29:15.090: INFO: PersistentVolumeClaim pvc-tw7gf found but phase is Pending instead of Bound.
May 23 06:29:17.136: INFO: PersistentVolumeClaim pvc-tw7gf found and phase=Bound (10.220843525s)
May 23 06:29:17.136: INFO: Waiting up to 3m0s for PersistentVolume local-jclv2 to have phase Bound
May 23 06:29:17.171: INFO: PersistentVolume local-jclv2 found and phase=Bound (34.938697ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8j9t
STEP: Creating a pod to test subpath
May 23 06:29:17.301: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8j9t" in namespace "provisioning-8903" to be "Succeeded or Failed"
May 23 06:29:17.337: INFO: Pod "pod-subpath-test-preprovisionedpv-8j9t": Phase="Pending", Reason="", readiness=false. Elapsed: 35.650478ms
May 23 06:29:19.373: INFO: Pod "pod-subpath-test-preprovisionedpv-8j9t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072198964s
May 23 06:29:21.408: INFO: Pod "pod-subpath-test-preprovisionedpv-8j9t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106833753s
May 23 06:29:23.443: INFO: Pod "pod-subpath-test-preprovisionedpv-8j9t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14220175s
May 23 06:29:25.478: INFO: Pod "pod-subpath-test-preprovisionedpv-8j9t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177142782s
May 23 06:29:27.513: INFO: Pod "pod-subpath-test-preprovisionedpv-8j9t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.211735873s
STEP: Saw pod success
May 23 06:29:27.513: INFO: Pod "pod-subpath-test-preprovisionedpv-8j9t" satisfied condition "Succeeded or Failed"
May 23 06:29:27.548: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-8j9t container test-container-subpath-preprovisionedpv-8j9t: <nil>
STEP: delete the pod
May 23 06:29:27.625: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8j9t to disappear
May 23 06:29:27.659: INFO: Pod pod-subpath-test-preprovisionedpv-8j9t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8j9t
May 23 06:29:27.659: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8j9t" in namespace "provisioning-8903"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:28.275: INFO: Driver windows-gcepd doesn't support ext3 -- skipping
... skipping 287 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:28.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-569" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:29.339: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 122 lines ...
• [SLOW TEST:10.795 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:11.055 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:30.929: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 3 lines ...
May 23 06:29:22.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 23 06:29:22.356: INFO: Waiting up to 5m0s for pod "downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184" in namespace "downward-api-5667" to be "Succeeded or Failed"
May 23 06:29:22.390: INFO: Pod "downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184": Phase="Pending", Reason="", readiness=false. Elapsed: 33.1649ms
May 23 06:29:24.423: INFO: Pod "downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066990462s
May 23 06:29:26.457: INFO: Pod "downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100532642s
May 23 06:29:28.491: INFO: Pod "downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134114712s
May 23 06:29:30.573: INFO: Pod "downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.216877813s
STEP: Saw pod success
May 23 06:29:30.573: INFO: Pod "downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184" satisfied condition "Succeeded or Failed"
May 23 06:29:30.658: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184 container dapi-container: <nil>
STEP: delete the pod
May 23 06:29:30.812: INFO: Waiting for pod downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184 to disappear
May 23 06:29:30.853: INFO: Pod downward-api-f74b52b7-9126-4edb-b203-9361ee0d3184 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 152 lines ...
• [SLOW TEST:16.796 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:35.243: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":28,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:29:34.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 224 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297
    should scale a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:37.482: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 21 lines ...
May 23 06:29:29.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 23 06:29:29.275: INFO: Waiting up to 5m0s for pod "downward-api-5791219d-7920-49e9-9ded-7d65fd71062a" in namespace "downward-api-7596" to be "Succeeded or Failed"
May 23 06:29:29.309: INFO: Pod "downward-api-5791219d-7920-49e9-9ded-7d65fd71062a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.516919ms
May 23 06:29:31.370: INFO: Pod "downward-api-5791219d-7920-49e9-9ded-7d65fd71062a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094829042s
May 23 06:29:33.406: INFO: Pod "downward-api-5791219d-7920-49e9-9ded-7d65fd71062a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130761417s
May 23 06:29:35.441: INFO: Pod "downward-api-5791219d-7920-49e9-9ded-7d65fd71062a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165627517s
May 23 06:29:37.475: INFO: Pod "downward-api-5791219d-7920-49e9-9ded-7d65fd71062a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.200306386s
STEP: Saw pod success
May 23 06:29:37.475: INFO: Pod "downward-api-5791219d-7920-49e9-9ded-7d65fd71062a" satisfied condition "Succeeded or Failed"
May 23 06:29:37.510: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod downward-api-5791219d-7920-49e9-9ded-7d65fd71062a container dapi-container: <nil>
STEP: delete the pod
May 23 06:29:37.586: INFO: Waiting for pod downward-api-5791219d-7920-49e9-9ded-7d65fd71062a to disappear
May 23 06:29:37.620: INFO: Pod downward-api-5791219d-7920-49e9-9ded-7d65fd71062a no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.630 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:8.356 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:48
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":3,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:10.703 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:38.023: INFO: Driver hostPath doesn't support ext4 -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-506e67d9-2050-4dbc-be2c-b07ce64eb48c
STEP: Creating a pod to test consume secrets
May 23 06:29:30.141: INFO: Waiting up to 5m0s for pod "pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef" in namespace "secrets-4811" to be "Succeeded or Failed"
May 23 06:29:30.175: INFO: Pod "pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef": Phase="Pending", Reason="", readiness=false. Elapsed: 34.185726ms
May 23 06:29:32.211: INFO: Pod "pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070125967s
May 23 06:29:34.247: INFO: Pod "pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105933584s
May 23 06:29:36.283: INFO: Pod "pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142348029s
May 23 06:29:38.318: INFO: Pod "pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.177128142s
STEP: Saw pod success
May 23 06:29:38.318: INFO: Pod "pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef" satisfied condition "Succeeded or Failed"
May 23 06:29:38.352: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef container secret-volume-test: <nil>
STEP: delete the pod
May 23 06:29:38.431: INFO: Waiting for pod pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef to disappear
May 23 06:29:38.466: INFO: Pod pod-secrets-72820443-ccb8-4881-bbde-31f769fb2bef no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.641 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:29:07.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename conntrack
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
• [SLOW TEST:31.403 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:199
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":2,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:38.975: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:39.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9832" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:39.122: INFO: Only supported for providers [openstack] (not aws)
... skipping 83 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:29:30.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:47
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:68
STEP: Creating a pod to test emptydir volume type on node default medium
May 23 06:29:31.145: INFO: Waiting up to 5m0s for pod "pod-e6bdd326-4686-42b7-b7d0-7414e16b9427" in namespace "emptydir-8597" to be "Succeeded or Failed"
May 23 06:29:31.179: INFO: Pod "pod-e6bdd326-4686-42b7-b7d0-7414e16b9427": Phase="Pending", Reason="", readiness=false. Elapsed: 33.834633ms
May 23 06:29:33.214: INFO: Pod "pod-e6bdd326-4686-42b7-b7d0-7414e16b9427": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068017816s
May 23 06:29:35.250: INFO: Pod "pod-e6bdd326-4686-42b7-b7d0-7414e16b9427": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104220493s
May 23 06:29:37.283: INFO: Pod "pod-e6bdd326-4686-42b7-b7d0-7414e16b9427": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137745981s
May 23 06:29:39.317: INFO: Pod "pod-e6bdd326-4686-42b7-b7d0-7414e16b9427": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171839335s
STEP: Saw pod success
May 23 06:29:39.317: INFO: Pod "pod-e6bdd326-4686-42b7-b7d0-7414e16b9427" satisfied condition "Succeeded or Failed"
May 23 06:29:39.351: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-e6bdd326-4686-42b7-b7d0-7414e16b9427 container test-container: <nil>
STEP: delete the pod
May 23 06:29:39.436: INFO: Waiting for pod pod-e6bdd326-4686-42b7-b7d0-7414e16b9427 to disappear
May 23 06:29:39.469: INFO: Pod pod-e6bdd326-4686-42b7-b7d0-7414e16b9427 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:68
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:39.549: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 42 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:94
May 23 06:29:35.468: INFO: Waiting up to 5m0s for pod "busybox-user-0-208d8bcf-86be-47ba-a8c7-b140885d882e" in namespace "security-context-test-9855" to be "Succeeded or Failed"
May 23 06:29:35.504: INFO: Pod "busybox-user-0-208d8bcf-86be-47ba-a8c7-b140885d882e": Phase="Pending", Reason="", readiness=false. Elapsed: 35.022528ms
May 23 06:29:37.539: INFO: Pod "busybox-user-0-208d8bcf-86be-47ba-a8c7-b140885d882e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070245384s
May 23 06:29:39.574: INFO: Pod "busybox-user-0-208d8bcf-86be-47ba-a8c7-b140885d882e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105977509s
May 23 06:29:39.575: INFO: Pod "busybox-user-0-208d8bcf-86be-47ba-a8c7-b140885d882e" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:39.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9855" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:26.646 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:920
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":4,"skipped":22,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:41.667: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 117 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:752
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:805
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:29:38.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
May 23 06:29:38.252: INFO: Waiting up to 5m0s for pod "pod-84662627-fd73-4852-b78a-8cd96d45f20b" in namespace "emptydir-4391" to be "Succeeded or Failed"
May 23 06:29:38.287: INFO: Pod "pod-84662627-fd73-4852-b78a-8cd96d45f20b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.577976ms
May 23 06:29:40.322: INFO: Pod "pod-84662627-fd73-4852-b78a-8cd96d45f20b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069331568s
May 23 06:29:42.367: INFO: Pod "pod-84662627-fd73-4852-b78a-8cd96d45f20b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114211209s
May 23 06:29:44.401: INFO: Pod "pod-84662627-fd73-4852-b78a-8cd96d45f20b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.149007896s
STEP: Saw pod success
May 23 06:29:44.401: INFO: Pod "pod-84662627-fd73-4852-b78a-8cd96d45f20b" satisfied condition "Succeeded or Failed"
May 23 06:29:44.436: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-84662627-fd73-4852-b78a-8cd96d45f20b container test-container: <nil>
STEP: delete the pod
May 23 06:29:44.515: INFO: Waiting for pod pod-84662627-fd73-4852-b78a-8cd96d45f20b to disappear
May 23 06:29:44.550: INFO: Pod pod-84662627-fd73-4852-b78a-8cd96d45f20b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.580 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 50 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:45.979: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:46.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4447" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:46.416: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 40 lines ...
May 23 06:29:43.773: INFO: PersistentVolumeClaim pvc-vdgh6 found but phase is Pending instead of Bound.
May 23 06:29:45.807: INFO: PersistentVolumeClaim pvc-vdgh6 found and phase=Bound (12.26943381s)
May 23 06:29:45.807: INFO: Waiting up to 3m0s for PersistentVolume local-lc47l to have phase Bound
May 23 06:29:45.840: INFO: PersistentVolume local-lc47l found and phase=Bound (33.096524ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7w8d
STEP: Creating a pod to test subpath
May 23 06:29:45.942: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7w8d" in namespace "provisioning-7252" to be "Succeeded or Failed"
May 23 06:29:45.976: INFO: Pod "pod-subpath-test-preprovisionedpv-7w8d": Phase="Pending", Reason="", readiness=false. Elapsed: 33.559956ms
May 23 06:29:48.010: INFO: Pod "pod-subpath-test-preprovisionedpv-7w8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067515567s
May 23 06:29:50.044: INFO: Pod "pod-subpath-test-preprovisionedpv-7w8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101398898s
May 23 06:29:52.078: INFO: Pod "pod-subpath-test-preprovisionedpv-7w8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135290549s
May 23 06:29:54.114: INFO: Pod "pod-subpath-test-preprovisionedpv-7w8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171941036s
STEP: Saw pod success
May 23 06:29:54.114: INFO: Pod "pod-subpath-test-preprovisionedpv-7w8d" satisfied condition "Succeeded or Failed"
May 23 06:29:54.148: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-7w8d container test-container-volume-preprovisionedpv-7w8d: <nil>
STEP: delete the pod
May 23 06:29:54.233: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7w8d to disappear
May 23 06:29:54.267: INFO: Pod pod-subpath-test-preprovisionedpv-7w8d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7w8d
May 23 06:29:54.267: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7w8d" in namespace "provisioning-7252"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:54.831: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 170 lines ...
May 23 06:29:43.874: INFO: PersistentVolumeClaim pvc-58c66 found but phase is Pending instead of Bound.
May 23 06:29:45.908: INFO: PersistentVolumeClaim pvc-58c66 found and phase=Bound (14.296003424s)
May 23 06:29:45.908: INFO: Waiting up to 3m0s for PersistentVolume local-6mn94 to have phase Bound
May 23 06:29:45.946: INFO: PersistentVolume local-6mn94 found and phase=Bound (37.55295ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sm4q
STEP: Creating a pod to test subpath
May 23 06:29:46.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sm4q" in namespace "provisioning-4878" to be "Succeeded or Failed"
May 23 06:29:46.084: INFO: Pod "pod-subpath-test-preprovisionedpv-sm4q": Phase="Pending", Reason="", readiness=false. Elapsed: 33.829679ms
May 23 06:29:48.127: INFO: Pod "pod-subpath-test-preprovisionedpv-sm4q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077031503s
May 23 06:29:50.162: INFO: Pod "pod-subpath-test-preprovisionedpv-sm4q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111334116s
May 23 06:29:52.197: INFO: Pod "pod-subpath-test-preprovisionedpv-sm4q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146596526s
May 23 06:29:54.233: INFO: Pod "pod-subpath-test-preprovisionedpv-sm4q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.182858343s
STEP: Saw pod success
May 23 06:29:54.233: INFO: Pod "pod-subpath-test-preprovisionedpv-sm4q" satisfied condition "Succeeded or Failed"
May 23 06:29:54.268: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-sm4q container test-container-subpath-preprovisionedpv-sm4q: <nil>
STEP: delete the pod
May 23 06:29:54.358: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sm4q to disappear
May 23 06:29:54.392: INFO: Pod pod-subpath-test-preprovisionedpv-sm4q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sm4q
May 23 06:29:54.392: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sm4q" in namespace "provisioning-4878"
... skipping 192 lines ...
May 23 06:29:34.324: INFO: PersistentVolume nfs-hwxcz found and phase=Bound (35.02337ms)
May 23 06:29:34.360: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-tf76d] to have phase Bound
May 23 06:29:34.395: INFO: PersistentVolumeClaim pvc-tf76d found and phase=Bound (34.950686ms)
STEP: Checking pod has write access to PersistentVolumes
May 23 06:29:34.430: INFO: Creating nfs test pod
May 23 06:29:34.469: INFO: Pod should terminate with exitcode 0 (success)
May 23 06:29:34.469: INFO: Waiting up to 5m0s for pod "pvc-tester-cfpnl" in namespace "pv-1938" to be "Succeeded or Failed"
May 23 06:29:34.504: INFO: Pod "pvc-tester-cfpnl": Phase="Pending", Reason="", readiness=false. Elapsed: 34.40775ms
May 23 06:29:36.546: INFO: Pod "pvc-tester-cfpnl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076977218s
May 23 06:29:38.581: INFO: Pod "pvc-tester-cfpnl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111488993s
STEP: Saw pod success
May 23 06:29:38.581: INFO: Pod "pvc-tester-cfpnl" satisfied condition "Succeeded or Failed"
May 23 06:29:38.581: INFO: Pod pvc-tester-cfpnl succeeded 
May 23 06:29:38.581: INFO: Deleting pod "pvc-tester-cfpnl" in namespace "pv-1938"
May 23 06:29:38.620: INFO: Wait up to 5m0s for pod "pvc-tester-cfpnl" to be fully deleted
May 23 06:29:38.688: INFO: Creating nfs test pod
May 23 06:29:38.723: INFO: Pod should terminate with exitcode 0 (success)
May 23 06:29:38.723: INFO: Waiting up to 5m0s for pod "pvc-tester-rqt7c" in namespace "pv-1938" to be "Succeeded or Failed"
May 23 06:29:38.757: INFO: Pod "pvc-tester-rqt7c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.943279ms
May 23 06:29:40.794: INFO: Pod "pvc-tester-rqt7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071459911s
May 23 06:29:42.830: INFO: Pod "pvc-tester-rqt7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107426479s
May 23 06:29:44.865: INFO: Pod "pvc-tester-rqt7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142026663s
May 23 06:29:46.899: INFO: Pod "pvc-tester-rqt7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176714231s
STEP: Saw pod success
May 23 06:29:46.899: INFO: Pod "pvc-tester-rqt7c" satisfied condition "Succeeded or Failed"
May 23 06:29:46.899: INFO: Pod pvc-tester-rqt7c succeeded 
May 23 06:29:46.899: INFO: Deleting pod "pvc-tester-rqt7c" in namespace "pv-1938"
May 23 06:29:46.938: INFO: Wait up to 5m0s for pod "pvc-tester-rqt7c" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
May 23 06:29:47.109: INFO: Deleting PVC pvc-4gjp7 to trigger reclamation of PV nfs-tdczm
May 23 06:29:47.109: INFO: Deleting PersistentVolumeClaim "pvc-4gjp7"
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:57.729: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1303
------------------------------
... skipping 63 lines ...
• [SLOW TEST:60.384 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:58.484: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:29:46.215: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6" in namespace "projected-1920" to be "Succeeded or Failed"
May 23 06:29:46.248: INFO: Pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.340665ms
May 23 06:29:48.282: INFO: Pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067016399s
May 23 06:29:50.315: INFO: Pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100725885s
May 23 06:29:52.350: INFO: Pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135781869s
May 23 06:29:54.385: INFO: Pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170800769s
May 23 06:29:56.419: INFO: Pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204414372s
May 23 06:29:58.453: INFO: Pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.238396064s
STEP: Saw pod success
May 23 06:29:58.453: INFO: Pod "downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6" satisfied condition "Succeeded or Failed"
May 23 06:29:58.486: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6 container client-container: <nil>
STEP: delete the pod
May 23 06:29:58.566: INFO: Waiting for pod downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6 to disappear
May 23 06:29:58.609: INFO: Pod downwardapi-volume-29f1b0ab-e6e3-471c-aa0f-cc017234dac6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:12.677 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:58.703: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 70 lines ...
• [SLOW TEST:20.954 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:58.767: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 104 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:29:58.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1667" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":3,"skipped":17,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 54 lines ...
• [SLOW TEST:13.483 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":4,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:29:59.959: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 46 lines ...
May 23 06:29:44.026: INFO: PersistentVolumeClaim pvc-jrhjl found but phase is Pending instead of Bound.
May 23 06:29:46.062: INFO: PersistentVolumeClaim pvc-jrhjl found and phase=Bound (8.178426314s)
May 23 06:29:46.062: INFO: Waiting up to 3m0s for PersistentVolume local-ffbrw to have phase Bound
May 23 06:29:46.096: INFO: PersistentVolume local-ffbrw found and phase=Bound (34.051763ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sxsv
STEP: Creating a pod to test subpath
May 23 06:29:46.204: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sxsv" in namespace "provisioning-7815" to be "Succeeded or Failed"
May 23 06:29:46.238: INFO: Pod "pod-subpath-test-preprovisionedpv-sxsv": Phase="Pending", Reason="", readiness=false. Elapsed: 34.075599ms
May 23 06:29:48.273: INFO: Pod "pod-subpath-test-preprovisionedpv-sxsv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068658209s
May 23 06:29:50.308: INFO: Pod "pod-subpath-test-preprovisionedpv-sxsv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103446009s
May 23 06:29:52.342: INFO: Pod "pod-subpath-test-preprovisionedpv-sxsv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138321125s
May 23 06:29:54.377: INFO: Pod "pod-subpath-test-preprovisionedpv-sxsv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173361887s
May 23 06:29:56.412: INFO: Pod "pod-subpath-test-preprovisionedpv-sxsv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207895734s
May 23 06:29:58.449: INFO: Pod "pod-subpath-test-preprovisionedpv-sxsv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.244433735s
STEP: Saw pod success
May 23 06:29:58.449: INFO: Pod "pod-subpath-test-preprovisionedpv-sxsv" satisfied condition "Succeeded or Failed"
May 23 06:29:58.483: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-sxsv container test-container-volume-preprovisionedpv-sxsv: <nil>
STEP: delete the pod
May 23 06:29:58.563: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sxsv to disappear
May 23 06:29:58.603: INFO: Pod pod-subpath-test-preprovisionedpv-sxsv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sxsv
May 23 06:29:58.603: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sxsv" in namespace "provisioning-7815"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
May 23 06:29:39.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
May 23 06:29:39.848: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 23 06:29:39.922: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9040" in namespace "provisioning-9040" to be "Succeeded or Failed"
May 23 06:29:39.958: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Pending", Reason="", readiness=false. Elapsed: 35.717285ms
May 23 06:29:42.006: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084445717s
May 23 06:29:44.045: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122727739s
May 23 06:29:46.082: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.160371196s
STEP: Saw pod success
May 23 06:29:46.082: INFO: Pod "hostpath-symlink-prep-provisioning-9040" satisfied condition "Succeeded or Failed"
May 23 06:29:46.082: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9040" in namespace "provisioning-9040"
May 23 06:29:46.135: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9040" to be fully deleted
May 23 06:29:46.172: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xml8
STEP: Creating a pod to test subpath
May 23 06:29:46.215: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xml8" in namespace "provisioning-9040" to be "Succeeded or Failed"
May 23 06:29:46.250: INFO: Pod "pod-subpath-test-inlinevolume-xml8": Phase="Pending", Reason="", readiness=false. Elapsed: 34.710133ms
May 23 06:29:48.285: INFO: Pod "pod-subpath-test-inlinevolume-xml8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069861509s
May 23 06:29:50.320: INFO: Pod "pod-subpath-test-inlinevolume-xml8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105200555s
May 23 06:29:52.359: INFO: Pod "pod-subpath-test-inlinevolume-xml8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143549973s
STEP: Saw pod success
May 23 06:29:52.359: INFO: Pod "pod-subpath-test-inlinevolume-xml8" satisfied condition "Succeeded or Failed"
May 23 06:29:52.394: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-xml8 container test-container-volume-inlinevolume-xml8: <nil>
STEP: delete the pod
May 23 06:29:52.475: INFO: Waiting for pod pod-subpath-test-inlinevolume-xml8 to disappear
May 23 06:29:52.510: INFO: Pod pod-subpath-test-inlinevolume-xml8 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xml8
May 23 06:29:52.510: INFO: Deleting pod "pod-subpath-test-inlinevolume-xml8" in namespace "provisioning-9040"
STEP: Deleting pod
May 23 06:29:52.550: INFO: Deleting pod "pod-subpath-test-inlinevolume-xml8" in namespace "provisioning-9040"
May 23 06:29:52.622: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9040" in namespace "provisioning-9040" to be "Succeeded or Failed"
May 23 06:29:52.657: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Pending", Reason="", readiness=false. Elapsed: 35.300696ms
May 23 06:29:54.693: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071003841s
May 23 06:29:56.729: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106830194s
May 23 06:29:58.765: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142367506s
May 23 06:30:00.800: INFO: Pod "hostpath-symlink-prep-provisioning-9040": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.177747468s
STEP: Saw pod success
May 23 06:30:00.800: INFO: Pod "hostpath-symlink-prep-provisioning-9040" satisfied condition "Succeeded or Failed"
May 23 06:30:00.800: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9040" in namespace "provisioning-9040"
May 23 06:30:00.840: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9040" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:00.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9040" for this suite.
... skipping 111 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:426
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:455
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:02.513: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 111 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should support inline execution and attach
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:551
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":2,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-windows] Windows volume mounts 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
May 23 06:30:03.974: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 105 lines ...
May 23 06:29:58.135: INFO: PersistentVolumeClaim pvc-g5vs7 found but phase is Pending instead of Bound.
May 23 06:30:00.169: INFO: PersistentVolumeClaim pvc-g5vs7 found and phase=Bound (14.276964987s)
May 23 06:30:00.170: INFO: Waiting up to 3m0s for PersistentVolume local-p9rl8 to have phase Bound
May 23 06:30:00.204: INFO: PersistentVolume local-p9rl8 found and phase=Bound (34.318072ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4cls
STEP: Creating a pod to test subpath
May 23 06:30:00.310: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4cls" in namespace "provisioning-6140" to be "Succeeded or Failed"
May 23 06:30:00.345: INFO: Pod "pod-subpath-test-preprovisionedpv-4cls": Phase="Pending", Reason="", readiness=false. Elapsed: 35.002379ms
May 23 06:30:02.380: INFO: Pod "pod-subpath-test-preprovisionedpv-4cls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070112407s
May 23 06:30:04.415: INFO: Pod "pod-subpath-test-preprovisionedpv-4cls": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104963326s
STEP: Saw pod success
May 23 06:30:04.415: INFO: Pod "pod-subpath-test-preprovisionedpv-4cls" satisfied condition "Succeeded or Failed"
May 23 06:30:04.452: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-4cls container test-container-subpath-preprovisionedpv-4cls: <nil>
STEP: delete the pod
May 23 06:30:04.593: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4cls to disappear
May 23 06:30:04.630: INFO: Pod pod-subpath-test-preprovisionedpv-4cls no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4cls
May 23 06:30:04.630: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4cls" in namespace "provisioning-6140"
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:05.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-714" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":3,"skipped":13,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":26,"failed":0}
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:30:05.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename metrics-grabber
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:05.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2471" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":6,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:05.690: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 165 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-5e7dc705-adce-4227-be12-93269b229604
STEP: Creating a pod to test consume secrets
May 23 06:30:02.784: INFO: Waiting up to 5m0s for pod "pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a" in namespace "secrets-1911" to be "Succeeded or Failed"
May 23 06:30:02.818: INFO: Pod "pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.32832ms
May 23 06:30:04.851: INFO: Pod "pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066913363s
May 23 06:30:06.885: INFO: Pod "pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100783904s
May 23 06:30:08.919: INFO: Pod "pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134498791s
STEP: Saw pod success
May 23 06:30:08.919: INFO: Pod "pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a" satisfied condition "Succeeded or Failed"
May 23 06:30:08.952: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a container secret-volume-test: <nil>
STEP: delete the pod
May 23 06:30:09.030: INFO: Waiting for pod pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a to disappear
May 23 06:30:09.063: INFO: Pod pod-secrets-97ff1cfa-9659-49ad-b8d5-ebd17ff7c61a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.595 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:09.161: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 37 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:29:37.234: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
May 23 06:29:44.149: INFO: PersistentVolumeClaim pvc-gxl9p found but phase is Pending instead of Bound.
May 23 06:29:46.183: INFO: PersistentVolumeClaim pvc-gxl9p found and phase=Bound (8.179757381s)
May 23 06:29:46.183: INFO: Waiting up to 3m0s for PersistentVolume aws-x9c4r to have phase Bound
May 23 06:29:46.221: INFO: PersistentVolume aws-x9c4r found and phase=Bound (37.802844ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-469h
STEP: Creating a pod to test exec-volume-test
May 23 06:29:46.323: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-469h" in namespace "volume-7268" to be "Succeeded or Failed"
May 23 06:29:46.356: INFO: Pod "exec-volume-test-preprovisionedpv-469h": Phase="Pending", Reason="", readiness=false. Elapsed: 33.441141ms
May 23 06:29:48.390: INFO: Pod "exec-volume-test-preprovisionedpv-469h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067516285s
May 23 06:29:50.424: INFO: Pod "exec-volume-test-preprovisionedpv-469h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101432082s
May 23 06:29:52.459: INFO: Pod "exec-volume-test-preprovisionedpv-469h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135582932s
May 23 06:29:54.494: INFO: Pod "exec-volume-test-preprovisionedpv-469h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171032978s
May 23 06:29:56.528: INFO: Pod "exec-volume-test-preprovisionedpv-469h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205146815s
May 23 06:29:58.565: INFO: Pod "exec-volume-test-preprovisionedpv-469h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.242359s
STEP: Saw pod success
May 23 06:29:58.565: INFO: Pod "exec-volume-test-preprovisionedpv-469h" satisfied condition "Succeeded or Failed"
May 23 06:29:58.609: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-469h container exec-container-preprovisionedpv-469h: <nil>
STEP: delete the pod
May 23 06:29:58.697: INFO: Waiting for pod exec-volume-test-preprovisionedpv-469h to disappear
May 23 06:29:58.731: INFO: Pod exec-volume-test-preprovisionedpv-469h no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-469h
May 23 06:29:58.731: INFO: Deleting pod "exec-volume-test-preprovisionedpv-469h" in namespace "volume-7268"
STEP: Deleting pv and pvc
May 23 06:29:58.764: INFO: Deleting PersistentVolumeClaim "pvc-gxl9p"
May 23 06:29:58.799: INFO: Deleting PersistentVolume "aws-x9c4r"
May 23 06:29:58.987: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0d886f085d0871107", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d886f085d0871107 is currently attached to i-03e33b3471bcf6e9f
	status code: 400, request id: 8b2ff331-92a1-42ae-9c01-6a9fb4ff559c
May 23 06:30:04.283: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0d886f085d0871107", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d886f085d0871107 is currently attached to i-03e33b3471bcf6e9f
	status code: 400, request id: 026018ed-d20a-43e4-b001-fbd1a1808d16
May 23 06:30:09.545: INFO: Successfully deleted PD "aws://ca-central-1a/vol-0d886f085d0871107".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:09.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7268" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:09.625: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 120 lines ...
May 23 06:30:03.475: INFO: Pod aws-client still exists
May 23 06:30:05.435: INFO: Waiting for pod aws-client to disappear
May 23 06:30:05.469: INFO: Pod aws-client still exists
May 23 06:30:07.435: INFO: Waiting for pod aws-client to disappear
May 23 06:30:07.471: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
May 23 06:30:07.628: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0dbc31a7509105299", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0dbc31a7509105299 is currently attached to i-0ec4cc948b7b1f9be
	status code: 400, request id: e584c2d3-99a6-4557-8405-4cd27f6a9535
May 23 06:30:12.893: INFO: Successfully deleted PD "aws://ca-central-1a/vol-0dbc31a7509105299".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:12.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6986" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":2,"skipped":17,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 36 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":7,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:15.929: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 159 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:163
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:16.821: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 123 lines ...
May 23 06:29:09.116: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 23 06:29:09.116: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 23 06:29:09.116: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-33-aws-scltrpr      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-33    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{5368709120 0} {<nil>} 5Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-33-aws-scltrpr,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-33    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{5368709120 0} {<nil>} 5Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-33-aws-scltrpr,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-33-aws-scltrpr
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May 23 06:29:09.277: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-g4jtk" in namespace "provisioning-33" to be "Succeeded or Failed"
May 23 06:29:09.311: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 34.552197ms
May 23 06:29:11.346: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069498286s
May 23 06:29:13.381: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104082997s
May 23 06:29:15.443: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165990427s
May 23 06:29:17.487: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.209700752s
May 23 06:29:19.524: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.246971007s
May 23 06:29:21.559: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.281767303s
May 23 06:29:23.593: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.316593156s
May 23 06:29:25.628: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.351448123s
May 23 06:29:27.663: INFO: Pod "pvc-volume-tester-writer-g4jtk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.386157993s
STEP: Saw pod success
May 23 06:29:27.663: INFO: Pod "pvc-volume-tester-writer-g4jtk" satisfied condition "Succeeded or Failed"
May 23 06:29:27.735: INFO: Pod pvc-volume-tester-writer-g4jtk has the following logs: 
May 23 06:29:27.735: INFO: Deleting pod "pvc-volume-tester-writer-g4jtk" in namespace "provisioning-33"
May 23 06:29:27.792: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-g4jtk" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-52-132.ca-central-1.compute.internal"
May 23 06:29:27.935: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-h8pl7" in namespace "provisioning-33" to be "Succeeded or Failed"
May 23 06:29:27.969: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.194021ms
May 23 06:29:30.005: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070131914s
May 23 06:29:32.040: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10527235s
May 23 06:29:34.102: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167176308s
May 23 06:29:36.137: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.201914574s
May 23 06:29:38.172: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.236742983s
... skipping 2 lines ...
May 23 06:29:44.281: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.345682706s
May 23 06:29:46.317: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.382315314s
May 23 06:29:48.352: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.417530712s
May 23 06:29:50.388: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.452829888s
May 23 06:29:52.423: INFO: Pod "pvc-volume-tester-reader-h8pl7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.487887336s
STEP: Saw pod success
May 23 06:29:52.423: INFO: Pod "pvc-volume-tester-reader-h8pl7" satisfied condition "Succeeded or Failed"
May 23 06:29:52.465: INFO: Pod pvc-volume-tester-reader-h8pl7 has the following logs: hello world

May 23 06:29:52.466: INFO: Deleting pod "pvc-volume-tester-reader-h8pl7" in namespace "provisioning-33"
May 23 06:29:52.504: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-h8pl7" to be fully deleted
May 23 06:29:52.538: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-45pmg] to have phase Bound
May 23 06:29:52.573: INFO: PersistentVolumeClaim pvc-45pmg found and phase=Bound (34.188388ms)
... skipping 41 lines ...
May 23 06:29:31.204: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8444-aws-scdp49t
STEP: creating a claim
May 23 06:29:31.241: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-7d98
STEP: Creating a pod to test subpath
May 23 06:29:31.370: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-7d98" in namespace "provisioning-8444" to be "Succeeded or Failed"
May 23 06:29:31.409: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 39.270413ms
May 23 06:29:33.449: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079068292s
May 23 06:29:35.484: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113947592s
May 23 06:29:37.518: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148555426s
May 23 06:29:39.555: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185583705s
May 23 06:29:41.595: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 10.225578516s
... skipping 3 lines ...
May 23 06:29:49.753: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 18.383492575s
May 23 06:29:51.788: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 20.41831742s
May 23 06:29:53.823: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 22.453471975s
May 23 06:29:55.858: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Pending", Reason="", readiness=false. Elapsed: 24.488071696s
May 23 06:29:57.893: INFO: Pod "pod-subpath-test-dynamicpv-7d98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.522802512s
STEP: Saw pod success
May 23 06:29:57.893: INFO: Pod "pod-subpath-test-dynamicpv-7d98" satisfied condition "Succeeded or Failed"
May 23 06:29:57.927: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-7d98 container test-container-subpath-dynamicpv-7d98: <nil>
STEP: delete the pod
May 23 06:29:58.004: INFO: Waiting for pod pod-subpath-test-dynamicpv-7d98 to disappear
May 23 06:29:58.038: INFO: Pod pod-subpath-test-dynamicpv-7d98 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-7d98
May 23 06:29:58.038: INFO: Deleting pod "pod-subpath-test-dynamicpv-7d98" in namespace "provisioning-8444"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:18.513: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 140 lines ...
May 23 06:30:13.310: INFO: PersistentVolumeClaim pvc-k9rsd found but phase is Pending instead of Bound.
May 23 06:30:15.345: INFO: PersistentVolumeClaim pvc-k9rsd found and phase=Bound (6.137857153s)
May 23 06:30:15.345: INFO: Waiting up to 3m0s for PersistentVolume local-qfwwb to have phase Bound
May 23 06:30:15.379: INFO: PersistentVolume local-qfwwb found and phase=Bound (34.220379ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qffs
STEP: Creating a pod to test subpath
May 23 06:30:15.485: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qffs" in namespace "provisioning-3210" to be "Succeeded or Failed"
May 23 06:30:15.519: INFO: Pod "pod-subpath-test-preprovisionedpv-qffs": Phase="Pending", Reason="", readiness=false. Elapsed: 34.440812ms
May 23 06:30:17.555: INFO: Pod "pod-subpath-test-preprovisionedpv-qffs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069721196s
May 23 06:30:19.589: INFO: Pod "pod-subpath-test-preprovisionedpv-qffs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104395362s
STEP: Saw pod success
May 23 06:30:19.589: INFO: Pod "pod-subpath-test-preprovisionedpv-qffs" satisfied condition "Succeeded or Failed"
May 23 06:30:19.624: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-qffs container test-container-subpath-preprovisionedpv-qffs: <nil>
STEP: delete the pod
May 23 06:30:19.703: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qffs to disappear
May 23 06:30:19.739: INFO: Pod pod-subpath-test-preprovisionedpv-qffs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qffs
May 23 06:30:19.739: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qffs" in namespace "provisioning-3210"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 54 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:21.053: INFO: Driver nfs doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 19 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-c39ac482-3757-4670-bfae-2fb4bca49123
STEP: Creating a pod to test consume configMaps
May 23 06:30:19.101: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-750842f1-b023-4fb3-889b-c75868c94134" in namespace "projected-4172" to be "Succeeded or Failed"
May 23 06:30:19.135: INFO: Pod "pod-projected-configmaps-750842f1-b023-4fb3-889b-c75868c94134": Phase="Pending", Reason="", readiness=false. Elapsed: 34.345705ms
May 23 06:30:21.171: INFO: Pod "pod-projected-configmaps-750842f1-b023-4fb3-889b-c75868c94134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070415644s
STEP: Saw pod success
May 23 06:30:21.171: INFO: Pod "pod-projected-configmaps-750842f1-b023-4fb3-889b-c75868c94134" satisfied condition "Succeeded or Failed"
May 23 06:30:21.206: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-projected-configmaps-750842f1-b023-4fb3-889b-c75868c94134 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 23 06:30:21.288: INFO: Waiting for pod pod-projected-configmaps-750842f1-b023-4fb3-889b-c75868c94134 to disappear
May 23 06:30:21.323: INFO: Pod pod-projected-configmaps-750842f1-b023-4fb3-889b-c75868c94134 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:21.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4172" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":77,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:21.444: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 129 lines ...
• [SLOW TEST:16.573 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:22.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7123" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":3,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:22.482: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 11 lines ...
      Distro debian doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:180
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:30:20.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:30:20.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-082033c6-7b0c-4973-ba32-8563b73a690b" in namespace "downward-api-2182" to be "Succeeded or Failed"
May 23 06:30:20.306: INFO: Pod "downwardapi-volume-082033c6-7b0c-4973-ba32-8563b73a690b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.859968ms
May 23 06:30:22.342: INFO: Pod "downwardapi-volume-082033c6-7b0c-4973-ba32-8563b73a690b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.070325947s
STEP: Saw pod success
May 23 06:30:22.343: INFO: Pod "downwardapi-volume-082033c6-7b0c-4973-ba32-8563b73a690b" satisfied condition "Succeeded or Failed"
May 23 06:30:22.377: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod downwardapi-volume-082033c6-7b0c-4973-ba32-8563b73a690b container client-container: <nil>
STEP: delete the pod
May 23 06:30:22.460: INFO: Waiting for pod downwardapi-volume-082033c6-7b0c-4973-ba32-8563b73a690b to disappear
May 23 06:30:22.494: INFO: Pod downwardapi-volume-082033c6-7b0c-4973-ba32-8563b73a690b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:22.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2182" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:22.575: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 69 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
May 23 06:29:56.283: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 23 06:29:56.319: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-j26g
STEP: Creating a pod to test atomic-volume-subpath
May 23 06:29:56.358: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-j26g" in namespace "provisioning-2708" to be "Succeeded or Failed"
May 23 06:29:56.393: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Pending", Reason="", readiness=false. Elapsed: 34.629692ms
May 23 06:29:58.427: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069114479s
May 23 06:30:00.462: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103877877s
May 23 06:30:02.497: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13876535s
May 23 06:30:04.570: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.212186226s
May 23 06:30:06.605: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Running", Reason="", readiness=true. Elapsed: 10.246863529s
... skipping 3 lines ...
May 23 06:30:14.754: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Running", Reason="", readiness=true. Elapsed: 18.395544029s
May 23 06:30:16.789: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Running", Reason="", readiness=true. Elapsed: 20.430543323s
May 23 06:30:18.824: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Running", Reason="", readiness=true. Elapsed: 22.465540379s
May 23 06:30:20.861: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Running", Reason="", readiness=true. Elapsed: 24.503118146s
May 23 06:30:22.932: INFO: Pod "pod-subpath-test-inlinevolume-j26g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.573581888s
STEP: Saw pod success
May 23 06:30:22.932: INFO: Pod "pod-subpath-test-inlinevolume-j26g" satisfied condition "Succeeded or Failed"
May 23 06:30:22.974: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-j26g container test-container-subpath-inlinevolume-j26g: <nil>
STEP: delete the pod
May 23 06:30:23.210: INFO: Waiting for pod pod-subpath-test-inlinevolume-j26g to disappear
May 23 06:30:23.249: INFO: Pod pod-subpath-test-inlinevolume-j26g no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-j26g
May 23 06:30:23.249: INFO: Deleting pod "pod-subpath-test-inlinevolume-j26g" in namespace "provisioning-2708"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":20,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:30:18.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 23 06:30:18.303: INFO: Waiting up to 5m0s for pod "downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0" in namespace "downward-api-9129" to be "Succeeded or Failed"
May 23 06:30:18.337: INFO: Pod "downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.209936ms
May 23 06:30:20.372: INFO: Pod "downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069210175s
May 23 06:30:22.406: INFO: Pod "downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103813576s
May 23 06:30:24.441: INFO: Pod "downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.138627512s
STEP: Saw pod success
May 23 06:30:24.441: INFO: Pod "downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0" satisfied condition "Succeeded or Failed"
May 23 06:30:24.476: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0 container dapi-container: <nil>
STEP: delete the pod
May 23 06:30:24.557: INFO: Waiting for pod downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0 to disappear
May 23 06:30:24.592: INFO: Pod downward-api-0d1de524-e2e6-4e94-99a0-f66a6b7a4fb0 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.575 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:24.674: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 64 lines ...
May 23 06:30:14.243: INFO: PersistentVolumeClaim pvc-47mhk found but phase is Pending instead of Bound.
May 23 06:30:16.277: INFO: PersistentVolumeClaim pvc-47mhk found and phase=Bound (6.136156939s)
May 23 06:30:16.277: INFO: Waiting up to 3m0s for PersistentVolume local-87k5s to have phase Bound
May 23 06:30:16.310: INFO: PersistentVolume local-87k5s found and phase=Bound (33.45704ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-96kk
STEP: Creating a pod to test exec-volume-test
May 23 06:30:16.470: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-96kk" in namespace "volume-2105" to be "Succeeded or Failed"
May 23 06:30:16.510: INFO: Pod "exec-volume-test-preprovisionedpv-96kk": Phase="Pending", Reason="", readiness=false. Elapsed: 39.523269ms
May 23 06:30:18.544: INFO: Pod "exec-volume-test-preprovisionedpv-96kk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073662786s
May 23 06:30:20.578: INFO: Pod "exec-volume-test-preprovisionedpv-96kk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107381136s
May 23 06:30:22.611: INFO: Pod "exec-volume-test-preprovisionedpv-96kk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141081014s
May 23 06:30:24.646: INFO: Pod "exec-volume-test-preprovisionedpv-96kk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.175953631s
STEP: Saw pod success
May 23 06:30:24.646: INFO: Pod "exec-volume-test-preprovisionedpv-96kk" satisfied condition "Succeeded or Failed"
May 23 06:30:24.680: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-96kk container exec-container-preprovisionedpv-96kk: <nil>
STEP: delete the pod
May 23 06:30:24.767: INFO: Waiting for pod exec-volume-test-preprovisionedpv-96kk to disappear
May 23 06:30:24.821: INFO: Pod exec-volume-test-preprovisionedpv-96kk no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-96kk
May 23 06:30:24.822: INFO: Deleting pod "exec-volume-test-preprovisionedpv-96kk" in namespace "volume-2105"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:25.431: INFO: Driver gluster doesn't support ext4 -- skipping
... skipping 72 lines ...
• [SLOW TEST:12.489 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:30:22.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-5e26aba3-aa29-42be-b54f-7b600a1d7899
STEP: Creating a pod to test consume configMaps
May 23 06:30:22.755: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613" in namespace "projected-1210" to be "Succeeded or Failed"
May 23 06:30:22.789: INFO: Pod "pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613": Phase="Pending", Reason="", readiness=false. Elapsed: 33.671806ms
May 23 06:30:24.829: INFO: Pod "pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074010109s
May 23 06:30:26.905: INFO: Pod "pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150609187s
May 23 06:30:28.939: INFO: Pod "pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.184322099s
STEP: Saw pod success
May 23 06:30:28.939: INFO: Pod "pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613" satisfied condition "Succeeded or Failed"
May 23 06:30:28.972: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 23 06:30:29.048: INFO: Waiting for pod pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613 to disappear
May 23 06:30:29.081: INFO: Pod pod-projected-configmaps-0c285fc7-2810-4b0b-a971-28f061fe7613 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.637 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":8,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:29.187: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 423 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:31.417: INFO: Driver cinder doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 218 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":4,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:34.459: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 67 lines ...
May 23 06:30:30.243: INFO: Waiting for pod aws-client to disappear
May 23 06:30:30.284: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
May 23 06:30:30.284: INFO: Deleting PersistentVolumeClaim "pvc-fcfvs"
May 23 06:30:30.324: INFO: Deleting PersistentVolume "aws-vhqr2"
May 23 06:30:30.677: INFO: Couldn't delete PD "aws://ca-central-1a/vol-09365bef81820a8cf", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09365bef81820a8cf is currently attached to i-0ec4cc948b7b1f9be
	status code: 400, request id: 2e179a33-3a25-4626-a488-f10ef210ad86
May 23 06:30:35.974: INFO: Couldn't delete PD "aws://ca-central-1a/vol-09365bef81820a8cf", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-09365bef81820a8cf is currently attached to i-0ec4cc948b7b1f9be
	status code: 400, request id: 20811923-6d2e-43f8-8b08-e40d91e6cf46
May 23 06:30:41.253: INFO: Successfully deleted PD "aws://ca-central-1a/vol-09365bef81820a8cf".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:41.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4475" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:41.343: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
May 23 06:30:28.941: INFO: PersistentVolumeClaim pvc-sxgvl found but phase is Pending instead of Bound.
May 23 06:30:30.976: INFO: PersistentVolumeClaim pvc-sxgvl found and phase=Bound (2.068726611s)
May 23 06:30:30.976: INFO: Waiting up to 3m0s for PersistentVolume local-tscgx to have phase Bound
May 23 06:30:31.011: INFO: PersistentVolume local-tscgx found and phase=Bound (34.319903ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rjjz
STEP: Creating a pod to test subpath
May 23 06:30:31.115: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rjjz" in namespace "provisioning-7363" to be "Succeeded or Failed"
May 23 06:30:31.150: INFO: Pod "pod-subpath-test-preprovisionedpv-rjjz": Phase="Pending", Reason="", readiness=false. Elapsed: 34.443807ms
May 23 06:30:33.185: INFO: Pod "pod-subpath-test-preprovisionedpv-rjjz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06945128s
May 23 06:30:35.236: INFO: Pod "pod-subpath-test-preprovisionedpv-rjjz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.120221494s
STEP: Saw pod success
May 23 06:30:35.236: INFO: Pod "pod-subpath-test-preprovisionedpv-rjjz" satisfied condition "Succeeded or Failed"
May 23 06:30:35.272: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-rjjz container test-container-subpath-preprovisionedpv-rjjz: <nil>
STEP: delete the pod
May 23 06:30:35.376: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rjjz to disappear
May 23 06:30:35.411: INFO: Pod pod-subpath-test-preprovisionedpv-rjjz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rjjz
May 23 06:30:35.411: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rjjz" in namespace "provisioning-7363"
STEP: Creating pod pod-subpath-test-preprovisionedpv-rjjz
STEP: Creating a pod to test subpath
May 23 06:30:35.491: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rjjz" in namespace "provisioning-7363" to be "Succeeded or Failed"
May 23 06:30:35.531: INFO: Pod "pod-subpath-test-preprovisionedpv-rjjz": Phase="Pending", Reason="", readiness=false. Elapsed: 39.272461ms
May 23 06:30:37.566: INFO: Pod "pod-subpath-test-preprovisionedpv-rjjz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074350052s
May 23 06:30:39.602: INFO: Pod "pod-subpath-test-preprovisionedpv-rjjz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11010487s
STEP: Saw pod success
May 23 06:30:39.602: INFO: Pod "pod-subpath-test-preprovisionedpv-rjjz" satisfied condition "Succeeded or Failed"
May 23 06:30:39.637: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-rjjz container test-container-subpath-preprovisionedpv-rjjz: <nil>
STEP: delete the pod
May 23 06:30:39.724: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rjjz to disappear
May 23 06:30:39.759: INFO: Pod pod-subpath-test-preprovisionedpv-rjjz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rjjz
May 23 06:30:39.759: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rjjz" in namespace "provisioning-7363"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:41.394: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 35 lines ...
• [SLOW TEST:20.038 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":8,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:41.603: INFO: Driver windows-gcepd doesn't support ext3 -- skipping
... skipping 82 lines ...
• [SLOW TEST:15.108 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:44.432: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 90 lines ...
May 23 06:30:28.985: INFO: PersistentVolumeClaim pvc-vqmjr found but phase is Pending instead of Bound.
May 23 06:30:31.019: INFO: PersistentVolumeClaim pvc-vqmjr found and phase=Bound (8.186674791s)
May 23 06:30:31.019: INFO: Waiting up to 3m0s for PersistentVolume local-rgjkr to have phase Bound
May 23 06:30:31.052: INFO: PersistentVolume local-rgjkr found and phase=Bound (33.425016ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4t6w
STEP: Creating a pod to test subpath
May 23 06:30:31.162: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4t6w" in namespace "provisioning-896" to be "Succeeded or Failed"
May 23 06:30:31.198: INFO: Pod "pod-subpath-test-preprovisionedpv-4t6w": Phase="Pending", Reason="", readiness=false. Elapsed: 35.841499ms
May 23 06:30:33.233: INFO: Pod "pod-subpath-test-preprovisionedpv-4t6w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070522071s
May 23 06:30:35.271: INFO: Pod "pod-subpath-test-preprovisionedpv-4t6w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108985814s
May 23 06:30:37.309: INFO: Pod "pod-subpath-test-preprovisionedpv-4t6w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147031899s
May 23 06:30:39.360: INFO: Pod "pod-subpath-test-preprovisionedpv-4t6w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19775849s
May 23 06:30:41.394: INFO: Pod "pod-subpath-test-preprovisionedpv-4t6w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.231507637s
May 23 06:30:43.432: INFO: Pod "pod-subpath-test-preprovisionedpv-4t6w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.270178381s
STEP: Saw pod success
May 23 06:30:43.432: INFO: Pod "pod-subpath-test-preprovisionedpv-4t6w" satisfied condition "Succeeded or Failed"
May 23 06:30:43.466: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-4t6w container test-container-volume-preprovisionedpv-4t6w: <nil>
STEP: delete the pod
May 23 06:30:43.548: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4t6w to disappear
May 23 06:30:43.582: INFO: Pod pod-subpath-test-preprovisionedpv-4t6w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4t6w
May 23 06:30:43.582: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4t6w" in namespace "provisioning-896"
... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:46.290: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 27 lines ...
May 23 06:30:09.384: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-320-aws-sc7nkpx
STEP: creating a claim
May 23 06:30:09.418: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-tr46
STEP: Creating a pod to test subpath
May 23 06:30:09.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tr46" in namespace "provisioning-320" to be "Succeeded or Failed"
May 23 06:30:09.562: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Pending", Reason="", readiness=false. Elapsed: 33.222813ms
May 23 06:30:11.597: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068262325s
May 23 06:30:13.631: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102129961s
May 23 06:30:15.666: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137254295s
May 23 06:30:17.700: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171275274s
May 23 06:30:19.738: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Pending", Reason="", readiness=false. Elapsed: 10.209106577s
May 23 06:30:21.775: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Pending", Reason="", readiness=false. Elapsed: 12.246249014s
May 23 06:30:23.809: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Pending", Reason="", readiness=false. Elapsed: 14.280033359s
May 23 06:30:25.843: INFO: Pod "pod-subpath-test-dynamicpv-tr46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.313761166s
STEP: Saw pod success
May 23 06:30:25.843: INFO: Pod "pod-subpath-test-dynamicpv-tr46" satisfied condition "Succeeded or Failed"
May 23 06:30:25.878: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-tr46 container test-container-volume-dynamicpv-tr46: <nil>
STEP: delete the pod
May 23 06:30:25.964: INFO: Waiting for pod pod-subpath-test-dynamicpv-tr46 to disappear
May 23 06:30:25.998: INFO: Pod pod-subpath-test-dynamicpv-tr46 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tr46
May 23 06:30:25.998: INFO: Deleting pod "pod-subpath-test-dynamicpv-tr46" in namespace "provisioning-320"
... skipping 35 lines ...
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-2598
[It] should adopt matching orphans and release non-matching pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:163
STEP: Creating statefulset ss in namespace statefulset-2598
May 23 06:30:46.568: INFO: error finding default storageClass : No default storage class found
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 23 06:30:46.569: INFO: Deleting all statefulset in ns statefulset-2598
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:46.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should adopt matching orphans and release non-matching pods [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:163

    error finding default storageClass : No default storage class found

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:825
------------------------------
SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 37 lines ...
      Driver azure-disk doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:174
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:30:46.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:46.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9607" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:46.845: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 38 lines ...
STEP: creating a claim
May 23 06:30:24.646: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 23 06:30:24.681: INFO: Waiting up to 5m0s for PersistentVolumeClaims [nfs4vnmz] to have phase Bound
May 23 06:30:24.717: INFO: PersistentVolumeClaim nfs4vnmz found and phase=Bound (35.443865ms)
STEP: Creating pod pod-subpath-test-dynamicpv-d6j5
STEP: Creating a pod to test subpath
May 23 06:30:24.869: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-d6j5" in namespace "provisioning-5628" to be "Succeeded or Failed"
May 23 06:30:24.914: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.236842ms
May 23 06:30:26.952: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082584761s
May 23 06:30:28.986: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116737395s
May 23 06:30:31.020: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151027514s
May 23 06:30:33.055: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185724857s
May 23 06:30:35.091: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221600428s
May 23 06:30:37.128: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.258220736s
May 23 06:30:39.163: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.293112579s
May 23 06:30:41.197: INFO: Pod "pod-subpath-test-dynamicpv-d6j5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.328010317s
STEP: Saw pod success
May 23 06:30:41.198: INFO: Pod "pod-subpath-test-dynamicpv-d6j5" satisfied condition "Succeeded or Failed"
May 23 06:30:41.232: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-d6j5 container test-container-subpath-dynamicpv-d6j5: <nil>
STEP: delete the pod
May 23 06:30:41.311: INFO: Waiting for pod pod-subpath-test-dynamicpv-d6j5 to disappear
May 23 06:30:41.348: INFO: Pod pod-subpath-test-dynamicpv-d6j5 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-d6j5
May 23 06:30:41.348: INFO: Deleting pod "pod-subpath-test-dynamicpv-d6j5" in namespace "provisioning-5628"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":26,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:47.785: INFO: Only supported for providers [azure] (not aws)
... skipping 125 lines ...
• [SLOW TEST:8.504 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":107,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:50.391: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 214 lines ...
May 23 06:30:50.757: INFO: AfterEach: Cleaning up test resources.
May 23 06:30:50.757: INFO: Deleting PersistentVolumeClaim "pvc-gmjr5"
May 23 06:30:50.791: INFO: Deleting PersistentVolume "hostpath-6d5rj"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":10,"skipped":109,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:50.846: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 39 lines ...
May 23 06:30:44.976: INFO: PersistentVolumeClaim pvc-k645c found but phase is Pending instead of Bound.
May 23 06:30:47.011: INFO: PersistentVolumeClaim pvc-k645c found and phase=Bound (4.103580505s)
May 23 06:30:47.011: INFO: Waiting up to 3m0s for PersistentVolume local-shrq9 to have phase Bound
May 23 06:30:47.045: INFO: PersistentVolume local-shrq9 found and phase=Bound (34.126042ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pl42
STEP: Creating a pod to test subpath
May 23 06:30:47.149: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pl42" in namespace "provisioning-4828" to be "Succeeded or Failed"
May 23 06:30:47.183: INFO: Pod "pod-subpath-test-preprovisionedpv-pl42": Phase="Pending", Reason="", readiness=false. Elapsed: 34.088194ms
May 23 06:30:49.218: INFO: Pod "pod-subpath-test-preprovisionedpv-pl42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068750865s
May 23 06:30:51.252: INFO: Pod "pod-subpath-test-preprovisionedpv-pl42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103088205s
STEP: Saw pod success
May 23 06:30:51.252: INFO: Pod "pod-subpath-test-preprovisionedpv-pl42" satisfied condition "Succeeded or Failed"
May 23 06:30:51.290: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-pl42 container test-container-subpath-preprovisionedpv-pl42: <nil>
STEP: delete the pod
May 23 06:30:51.381: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pl42 to disappear
May 23 06:30:51.415: INFO: Pod pod-subpath-test-preprovisionedpv-pl42 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pl42
May 23 06:30:51.415: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pl42" in namespace "provisioning-4828"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:52.102: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 88 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:30:48.063: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a7ad68b-f367-487e-8d5a-54326bab3f1f" in namespace "downward-api-6362" to be "Succeeded or Failed"
May 23 06:30:48.097: INFO: Pod "downwardapi-volume-8a7ad68b-f367-487e-8d5a-54326bab3f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.025744ms
May 23 06:30:50.132: INFO: Pod "downwardapi-volume-8a7ad68b-f367-487e-8d5a-54326bab3f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068658085s
May 23 06:30:52.171: INFO: Pod "downwardapi-volume-8a7ad68b-f367-487e-8d5a-54326bab3f1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107825943s
STEP: Saw pod success
May 23 06:30:52.171: INFO: Pod "downwardapi-volume-8a7ad68b-f367-487e-8d5a-54326bab3f1f" satisfied condition "Succeeded or Failed"
May 23 06:30:52.207: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod downwardapi-volume-8a7ad68b-f367-487e-8d5a-54326bab3f1f container client-container: <nil>
STEP: delete the pod
May 23 06:30:52.290: INFO: Waiting for pod downwardapi-volume-8a7ad68b-f367-487e-8d5a-54326bab3f1f to disappear
May 23 06:30:52.325: INFO: Pod downwardapi-volume-8a7ad68b-f367-487e-8d5a-54326bab3f1f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:30:52.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6362" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:52.412: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 47 lines ...
May 23 06:30:31.766: INFO: PersistentVolumeClaim pvc-7ds8t found and phase=Bound (14.285433718s)
May 23 06:30:31.766: INFO: Waiting up to 3m0s for PersistentVolume nfs-bc2zc to have phase Bound
May 23 06:30:31.801: INFO: PersistentVolume nfs-bc2zc found and phase=Bound (34.904867ms)
STEP: Checking pod has write access to PersistentVolume
May 23 06:30:31.873: INFO: Creating nfs test pod
May 23 06:30:31.910: INFO: Pod should terminate with exitcode 0 (success)
May 23 06:30:31.910: INFO: Waiting up to 5m0s for pod "pvc-tester-rk7xr" in namespace "pv-5021" to be "Succeeded or Failed"
May 23 06:30:31.945: INFO: Pod "pvc-tester-rk7xr": Phase="Pending", Reason="", readiness=false. Elapsed: 35.281627ms
May 23 06:30:33.986: INFO: Pod "pvc-tester-rk7xr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075896013s
May 23 06:30:36.026: INFO: Pod "pvc-tester-rk7xr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115941302s
May 23 06:30:38.074: INFO: Pod "pvc-tester-rk7xr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163998517s
May 23 06:30:40.109: INFO: Pod "pvc-tester-rk7xr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199172453s
May 23 06:30:42.145: INFO: Pod "pvc-tester-rk7xr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.234911079s
STEP: Saw pod success
May 23 06:30:42.145: INFO: Pod "pvc-tester-rk7xr" satisfied condition "Succeeded or Failed"
May 23 06:30:42.145: INFO: Pod pvc-tester-rk7xr succeeded 
May 23 06:30:42.145: INFO: Deleting pod "pvc-tester-rk7xr" in namespace "pv-5021"
May 23 06:30:42.183: INFO: Wait up to 5m0s for pod "pvc-tester-rk7xr" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
May 23 06:30:42.218: INFO: Deleting PVC pvc-7ds8t to trigger reclamation of PV 
May 23 06:30:42.218: INFO: Deleting PersistentVolumeClaim "pvc-7ds8t"
... skipping 46 lines ...
• [SLOW TEST:7.353 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":7,"skipped":50,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:54.181: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 61 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Driver emptydir doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:174
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:94
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
... skipping 8 lines ...
May 23 06:30:25.254: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-7131-aws-scnjrxl
STEP: creating a claim
May 23 06:30:25.289: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
May 23 06:30:25.360: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
May 23 06:30:25.445: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:27.516: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:29.514: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:31.513: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:33.513: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:35.515: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:37.516: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:39.556: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:41.513: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:43.512: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:45.515: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:47.512: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:49.513: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:51.518: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:53.514: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:55.515: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7131-aws-scnjrxl",
  	... // 2 identical fields
  }

May 23 06:30:55.582: INFO: Error updating pvc awsw5fgn: PersistentVolumeClaim "awsw5fgn" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":6,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:30:55.799: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 115 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":6,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:00.667: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 42 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-0ba7513c-e337-4854-bb92-d991778157be
STEP: Creating a pod to test consume configMaps
May 23 06:30:52.405: INFO: Waiting up to 5m0s for pod "pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802" in namespace "configmap-4776" to be "Succeeded or Failed"
May 23 06:30:52.439: INFO: Pod "pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802": Phase="Pending", Reason="", readiness=false. Elapsed: 33.892913ms
May 23 06:30:54.474: INFO: Pod "pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068386958s
May 23 06:30:56.508: INFO: Pod "pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1029401s
May 23 06:30:58.544: INFO: Pod "pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138521942s
May 23 06:31:00.582: INFO: Pod "pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176489708s
STEP: Saw pod success
May 23 06:31:00.582: INFO: Pod "pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802" satisfied condition "Succeeded or Failed"
May 23 06:31:00.616: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802 container configmap-volume-test: <nil>
STEP: delete the pod
May 23 06:31:00.692: INFO: Waiting for pod pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802 to disappear
May 23 06:31:00.728: INFO: Pod pod-configmaps-5c1c957b-45cc-4e59-8c52-3d9d2e549802 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.645 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:110
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:00.814: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 19 lines ...
      Driver cinder doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:174
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":38,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:00.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:01.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-5632" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets","total":-1,"completed":9,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:01.668: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:30:56.013: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860" in namespace "downward-api-5987" to be "Succeeded or Failed"
May 23 06:30:56.051: INFO: Pod "downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860": Phase="Pending", Reason="", readiness=false. Elapsed: 37.951312ms
May 23 06:30:58.085: INFO: Pod "downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07261713s
May 23 06:31:00.125: INFO: Pod "downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112685347s
May 23 06:31:02.159: INFO: Pod "downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146531809s
May 23 06:31:04.193: INFO: Pod "downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180580637s
STEP: Saw pod success
May 23 06:31:04.193: INFO: Pod "downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860" satisfied condition "Succeeded or Failed"
May 23 06:31:04.227: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860 container client-container: <nil>
STEP: delete the pod
May 23 06:31:04.314: INFO: Waiting for pod downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860 to disappear
May 23 06:31:04.347: INFO: Pod downwardapi-volume-0d35ecbb-8d7a-4921-8eaa-0ddd1d46f860 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.608 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:88
May 23 06:31:04.428: INFO: Driver "csi-hostpath" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 4 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:97
------------------------------
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:05.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-4845" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":8,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:05.552: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 39 lines ...
May 23 06:30:58.353: INFO: PersistentVolumeClaim pvc-vjn69 found but phase is Pending instead of Bound.
May 23 06:31:00.386: INFO: PersistentVolumeClaim pvc-vjn69 found and phase=Bound (4.100909233s)
May 23 06:31:00.386: INFO: Waiting up to 3m0s for PersistentVolume local-84svb to have phase Bound
May 23 06:31:00.420: INFO: PersistentVolume local-84svb found and phase=Bound (33.303807ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-q58h
STEP: Creating a pod to test exec-volume-test
May 23 06:31:00.522: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-q58h" in namespace "volume-8206" to be "Succeeded or Failed"
May 23 06:31:00.559: INFO: Pod "exec-volume-test-preprovisionedpv-q58h": Phase="Pending", Reason="", readiness=false. Elapsed: 36.649932ms
May 23 06:31:02.596: INFO: Pod "exec-volume-test-preprovisionedpv-q58h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073802013s
May 23 06:31:04.631: INFO: Pod "exec-volume-test-preprovisionedpv-q58h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109431686s
STEP: Saw pod success
May 23 06:31:04.631: INFO: Pod "exec-volume-test-preprovisionedpv-q58h" satisfied condition "Succeeded or Failed"
May 23 06:31:04.665: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-q58h container exec-container-preprovisionedpv-q58h: <nil>
STEP: delete the pod
May 23 06:31:04.750: INFO: Waiting for pod exec-volume-test-preprovisionedpv-q58h to disappear
May 23 06:31:04.784: INFO: Pod exec-volume-test-preprovisionedpv-q58h no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-q58h
May 23 06:31:04.784: INFO: Deleting pod "exec-volume-test-preprovisionedpv-q58h" in namespace "volume-8206"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":113,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:06.280: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 223 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:752
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:805
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":3,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:08.945: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 46 lines ...
• [SLOW TEST:16.758 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:09.189: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 48 lines ...
• [SLOW TEST:9.130 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":10,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
May 23 06:30:59.023: INFO: PersistentVolumeClaim pvc-jrgtf found but phase is Pending instead of Bound.
May 23 06:31:01.058: INFO: PersistentVolumeClaim pvc-jrgtf found and phase=Bound (4.1042132s)
May 23 06:31:01.058: INFO: Waiting up to 3m0s for PersistentVolume local-kfjnf to have phase Bound
May 23 06:31:01.092: INFO: PersistentVolume local-kfjnf found and phase=Bound (34.392956ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ff7l
STEP: Creating a pod to test subpath
May 23 06:31:01.197: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ff7l" in namespace "provisioning-8165" to be "Succeeded or Failed"
May 23 06:31:01.232: INFO: Pod "pod-subpath-test-preprovisionedpv-ff7l": Phase="Pending", Reason="", readiness=false. Elapsed: 34.495409ms
May 23 06:31:03.303: INFO: Pod "pod-subpath-test-preprovisionedpv-ff7l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105880299s
May 23 06:31:05.339: INFO: Pod "pod-subpath-test-preprovisionedpv-ff7l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141814625s
May 23 06:31:07.374: INFO: Pod "pod-subpath-test-preprovisionedpv-ff7l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176831026s
May 23 06:31:09.411: INFO: Pod "pod-subpath-test-preprovisionedpv-ff7l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.213627587s
May 23 06:31:11.448: INFO: Pod "pod-subpath-test-preprovisionedpv-ff7l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.250846717s
STEP: Saw pod success
May 23 06:31:11.448: INFO: Pod "pod-subpath-test-preprovisionedpv-ff7l" satisfied condition "Succeeded or Failed"
May 23 06:31:11.483: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-ff7l container test-container-volume-preprovisionedpv-ff7l: <nil>
STEP: delete the pod
May 23 06:31:11.572: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ff7l to disappear
May 23 06:31:11.606: INFO: Pod pod-subpath-test-preprovisionedpv-ff7l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ff7l
May 23 06:31:11.606: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ff7l" in namespace "provisioning-8165"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":61,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:12.218: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:12.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-1034" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":9,"skipped":67,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:12.573: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 226 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:244
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:245
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:00.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-e1d112e5-5386-4e4e-b0e9-f8599f10b255
STEP: Creating a pod to test consume secrets
May 23 06:31:00.939: INFO: Waiting up to 5m0s for pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4" in namespace "secrets-954" to be "Succeeded or Failed"
May 23 06:31:00.972: INFO: Pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.055523ms
May 23 06:31:03.008: INFO: Pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068588179s
May 23 06:31:05.042: INFO: Pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102370423s
May 23 06:31:07.077: INFO: Pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137535067s
May 23 06:31:09.110: INFO: Pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171130778s
May 23 06:31:11.144: INFO: Pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204682429s
May 23 06:31:13.178: INFO: Pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.238658529s
STEP: Saw pod success
May 23 06:31:13.178: INFO: Pod "pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4" satisfied condition "Succeeded or Failed"
May 23 06:31:13.211: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4 container secret-volume-test: <nil>
STEP: delete the pod
May 23 06:31:13.287: INFO: Waiting for pod pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4 to disappear
May 23 06:31:13.320: INFO: Pod pod-secrets-152ae3a4-1800-4dac-ad5f-a753259d51c4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 13 lines ...
May 23 06:31:08.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
May 23 06:31:09.164: INFO: Waiting up to 5m0s for pod "client-containers-387eca76-2be9-451e-a1d5-4a64ff05f86d" in namespace "containers-6592" to be "Succeeded or Failed"
May 23 06:31:09.198: INFO: Pod "client-containers-387eca76-2be9-451e-a1d5-4a64ff05f86d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.293973ms
May 23 06:31:11.233: INFO: Pod "client-containers-387eca76-2be9-451e-a1d5-4a64ff05f86d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068964036s
May 23 06:31:13.268: INFO: Pod "client-containers-387eca76-2be9-451e-a1d5-4a64ff05f86d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103782848s
STEP: Saw pod success
May 23 06:31:13.268: INFO: Pod "client-containers-387eca76-2be9-451e-a1d5-4a64ff05f86d" satisfied condition "Succeeded or Failed"
May 23 06:31:13.302: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod client-containers-387eca76-2be9-451e-a1d5-4a64ff05f86d container test-container: <nil>
STEP: delete the pod
May 23 06:31:13.386: INFO: Waiting for pod client-containers-387eca76-2be9-451e-a1d5-4a64ff05f86d to disappear
May 23 06:31:13.420: INFO: Pod client-containers-387eca76-2be9-451e-a1d5-4a64ff05f86d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:13.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6592" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:13.513: INFO: Driver windows-gcepd doesn't support  -- skipping
... skipping 35 lines ...
      Distro debian doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:180
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:88
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:13.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
May 23 06:31:13.607: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:ca-central-1a]
May 23 06:31:13.607: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
May 23 06:31:13.607: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 300 lines ...
STEP: Destroying namespace "services-3782" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":8,"skipped":87,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 5 lines ...
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-1390
[It] should not deadlock when a pod's predecessor fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248
STEP: Creating statefulset ss in namespace statefulset-1390
May 23 06:31:15.341: INFO: error finding default storageClass : No default storage class found
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
May 23 06:31:15.342: INFO: Deleting all statefulset in ns statefulset-1390
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:15.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should not deadlock when a pod's predecessor fails [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:248

    error finding default storageClass : No default storage class found

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pv/pv.go:825
------------------------------
S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 157 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
May 23 06:29:51.636: INFO: Pod pod-submit-status-2-1 on node ip-172-20-36-181.ca-central-1.compute.internal timings total=8.312934509s t=1.192s run=1s execute=0s
May 23 06:29:54.448: INFO: watch delete seen for pod-submit-status-2-2
May 23 06:29:54.448: INFO: Pod pod-submit-status-2-2 on node ip-172-20-41-57.ca-central-1.compute.internal timings total=2.811247387s t=436ms run=0s execute=0s
May 23 06:29:57.635: INFO: watch delete seen for pod-submit-status-1-2
May 23 06:29:57.635: INFO: Pod pod-submit-status-1-2 on node ip-172-20-36-181.ca-central-1.compute.internal timings total=10.000947221s t=899ms run=2s execute=0s
May 23 06:29:58.353: INFO: watch delete seen for pod-submit-status-0-0
May 23 06:29:58.353: INFO: pod pod-submit-status-0-0 on node ip-172-20-52-132.ca-central-1.compute.internal failed with the symptoms of https://github.com/kubernetes/kubernetes/issues/88766
May 23 06:29:58.353: INFO: pod pod-submit-status-0-0 on node ip-172-20-52-132.ca-central-1.compute.internal failed with the symptoms of https://github.com/kubernetes/kubernetes/issues/88766
May 23 06:29:58.353: INFO: pod pod-submit-status-0-0 on node ip-172-20-52-132.ca-central-1.compute.internal failed with the symptoms of https://github.com/kubernetes/kubernetes/issues/88766
May 23 06:29:58.353: INFO: Pod pod-submit-status-0-0 on node ip-172-20-52-132.ca-central-1.compute.internal timings total=18.615124203s t=1.422s run=2s execute=0s
May 23 06:30:00.849: INFO: watch delete seen for pod-submit-status-2-3
May 23 06:30:00.849: INFO: Pod pod-submit-status-2-3 on node ip-172-20-41-57.ca-central-1.compute.internal timings total=6.401356489s t=1.743s run=0s execute=0s
May 23 06:30:02.237: INFO: watch delete seen for pod-submit-status-0-1
May 23 06:30:02.238: INFO: Pod pod-submit-status-0-1 on node ip-172-20-36-181.ca-central-1.compute.internal timings total=3.884200225s t=211ms run=0s execute=0s
May 23 06:30:04.835: INFO: watch delete seen for pod-submit-status-1-3
... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  [k8s.io] Pod Container Status
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should never report success for a pending container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:208
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container Status should never report success for a pending container","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":9,"skipped":30,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:06.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-688c7351-b1a5-4a0d-8594-9f65aa4adf26
STEP: Creating a pod to test consume secrets
May 23 06:31:06.745: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3" in namespace "projected-6650" to be "Succeeded or Failed"
May 23 06:31:06.778: INFO: Pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3": Phase="Pending", Reason="", readiness=false. Elapsed: 33.415212ms
May 23 06:31:08.812: INFO: Pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067458613s
May 23 06:31:10.848: INFO: Pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103697117s
May 23 06:31:12.882: INFO: Pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137542006s
May 23 06:31:14.916: INFO: Pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171348789s
May 23 06:31:16.950: INFO: Pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205378402s
May 23 06:31:18.984: INFO: Pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.23938159s
STEP: Saw pod success
May 23 06:31:18.984: INFO: Pod "pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3" satisfied condition "Succeeded or Failed"
May 23 06:31:19.018: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 23 06:31:19.095: INFO: Waiting for pod pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3 to disappear
May 23 06:31:19.129: INFO: Pod pod-projected-secrets-98d4adda-a9b6-4bf4-8d76-434b8f90bba3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:12.694 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:29:55.308: INFO: >>> kubeConfig: /root/.kube/config
... skipping 72 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:19.208: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192

      Driver hostPathSymlink doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:19.216: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 98 lines ...
May 23 06:30:43.971: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 23 06:30:43.971: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
May 23 06:30:43.971: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-4582-nfs-scdm8lt      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:example.com/nfs-provisioning-4582,Parameters:map[string]string{mountOptions: vers=4.1,},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4582    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{5368709120 0} {<nil>} 5Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4582-nfs-scdm8lt,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4582    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{5368709120 0} {<nil>} 5Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4582-nfs-scdm8lt,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a StorageClass provisioning-4582-nfs-scdm8lt
STEP: creating a claim
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
May 23 06:30:44.108: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-wlrp4" in namespace "provisioning-4582" to be "Succeeded or Failed"
May 23 06:30:44.141: INFO: Pod "pvc-volume-tester-writer-wlrp4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.329077ms
May 23 06:30:46.176: INFO: Pod "pvc-volume-tester-writer-wlrp4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068718124s
May 23 06:30:48.210: INFO: Pod "pvc-volume-tester-writer-wlrp4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102480766s
May 23 06:30:50.244: INFO: Pod "pvc-volume-tester-writer-wlrp4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136251542s
May 23 06:30:52.279: INFO: Pod "pvc-volume-tester-writer-wlrp4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17142637s
May 23 06:30:54.313: INFO: Pod "pvc-volume-tester-writer-wlrp4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205464368s
May 23 06:30:56.347: INFO: Pod "pvc-volume-tester-writer-wlrp4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.239566013s
STEP: Saw pod success
May 23 06:30:56.347: INFO: Pod "pvc-volume-tester-writer-wlrp4" satisfied condition "Succeeded or Failed"
May 23 06:30:56.418: INFO: Pod pvc-volume-tester-writer-wlrp4 has the following logs: 
May 23 06:30:56.418: INFO: Deleting pod "pvc-volume-tester-writer-wlrp4" in namespace "provisioning-4582"
May 23 06:30:56.457: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-wlrp4" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-41-57.ca-central-1.compute.internal"
May 23 06:30:56.594: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-c6g2c" in namespace "provisioning-4582" to be "Succeeded or Failed"
May 23 06:30:56.628: INFO: Pod "pvc-volume-tester-reader-c6g2c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.306482ms
May 23 06:30:58.662: INFO: Pod "pvc-volume-tester-reader-c6g2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067188147s
May 23 06:31:00.695: INFO: Pod "pvc-volume-tester-reader-c6g2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100788442s
May 23 06:31:02.729: INFO: Pod "pvc-volume-tester-reader-c6g2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134556279s
May 23 06:31:04.763: INFO: Pod "pvc-volume-tester-reader-c6g2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16833416s
May 23 06:31:06.800: INFO: Pod "pvc-volume-tester-reader-c6g2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.205643924s
STEP: Saw pod success
May 23 06:31:06.800: INFO: Pod "pvc-volume-tester-reader-c6g2c" satisfied condition "Succeeded or Failed"
May 23 06:31:06.837: INFO: Pod pvc-volume-tester-reader-c6g2c has the following logs: hello world

May 23 06:31:06.837: INFO: Deleting pod "pvc-volume-tester-reader-c6g2c" in namespace "provisioning-4582"
May 23 06:31:06.876: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-c6g2c" to be fully deleted
May 23 06:31:06.910: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-qfmx5] to have phase Bound
May 23 06:31:06.943: INFO: PersistentVolumeClaim pvc-qfmx5 found and phase=Bound (33.252874ms)
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":5,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 243 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":139,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:10.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-69f59dbe-d84a-4b7b-84b5-2fb2db377b88
STEP: Creating a pod to test consume secrets
May 23 06:31:10.567: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e" in namespace "projected-3417" to be "Succeeded or Failed"
May 23 06:31:10.605: INFO: Pod "pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e": Phase="Pending", Reason="", readiness=false. Elapsed: 38.329017ms
May 23 06:31:12.642: INFO: Pod "pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075168133s
May 23 06:31:14.676: INFO: Pod "pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10978292s
May 23 06:31:16.711: INFO: Pod "pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144491607s
May 23 06:31:18.746: INFO: Pod "pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179351598s
May 23 06:31:20.838: INFO: Pod "pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.271588933s
STEP: Saw pod success
May 23 06:31:20.838: INFO: Pod "pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e" satisfied condition "Succeeded or Failed"
May 23 06:31:20.937: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e container projected-secret-volume-test: <nil>
STEP: delete the pod
May 23 06:31:21.431: INFO: Waiting for pod pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e to disappear
May 23 06:31:21.524: INFO: Pod pod-projected-secrets-e5947db1-3938-4d11-8927-83950f42564e no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.405 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":41,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:21.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:22.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2021" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":12,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:22.303: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:107
STEP: Creating a pod to test downward API volume plugin
May 23 06:31:16.035: INFO: Waiting up to 5m0s for pod "metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1" in namespace "downward-api-6456" to be "Succeeded or Failed"
May 23 06:31:16.068: INFO: Pod "metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1": Phase="Pending", Reason="", readiness=false. Elapsed: 33.27ms
May 23 06:31:18.129: INFO: Pod "metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094416113s
May 23 06:31:20.213: INFO: Pod "metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178014311s
May 23 06:31:22.256: INFO: Pod "metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.221333526s
STEP: Saw pod success
May 23 06:31:22.256: INFO: Pod "metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1" satisfied condition "Succeeded or Failed"
May 23 06:31:22.292: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1 container client-container: <nil>
STEP: delete the pod
May 23 06:31:22.387: INFO: Waiting for pod metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1 to disappear
May 23 06:31:22.421: INFO: Pod metadata-volume-871f4c4e-3228-4206-a09a-b92e8dc866d1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.677 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:107
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:22.515: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 72 lines ...
May 23 06:30:36.467: INFO: PersistentVolumeClaim csi-hostpathh67lk found but phase is Pending instead of Bound.
May 23 06:30:38.502: INFO: PersistentVolumeClaim csi-hostpathh67lk found but phase is Pending instead of Bound.
May 23 06:30:40.536: INFO: PersistentVolumeClaim csi-hostpathh67lk found but phase is Pending instead of Bound.
May 23 06:30:42.571: INFO: PersistentVolumeClaim csi-hostpathh67lk found and phase=Bound (6.164704629s)
STEP: Creating pod pod-subpath-test-dynamicpv-nzst
STEP: Creating a pod to test subpath
May 23 06:30:42.680: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nzst" in namespace "provisioning-6143" to be "Succeeded or Failed"
May 23 06:30:42.714: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 34.731085ms
May 23 06:30:44.749: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069375458s
May 23 06:30:46.793: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113594656s
May 23 06:30:48.828: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148621559s
May 23 06:30:50.863: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183209681s
May 23 06:30:52.898: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 10.218104895s
May 23 06:30:54.932: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 12.252685781s
May 23 06:30:56.967: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 14.287483895s
May 23 06:30:59.002: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 16.322345623s
May 23 06:31:01.037: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Pending", Reason="", readiness=false. Elapsed: 18.35716402s
May 23 06:31:03.072: INFO: Pod "pod-subpath-test-dynamicpv-nzst": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.392263943s
STEP: Saw pod success
May 23 06:31:03.072: INFO: Pod "pod-subpath-test-dynamicpv-nzst" satisfied condition "Succeeded or Failed"
May 23 06:31:03.111: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-nzst container test-container-subpath-dynamicpv-nzst: <nil>
STEP: delete the pod
May 23 06:31:03.311: INFO: Waiting for pod pod-subpath-test-dynamicpv-nzst to disappear
May 23 06:31:03.373: INFO: Pod pod-subpath-test-dynamicpv-nzst no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nzst
May 23 06:31:03.373: INFO: Deleting pod "pod-subpath-test-dynamicpv-nzst" in namespace "provisioning-6143"
... skipping 79 lines ...
May 23 06:31:17.181: INFO: Got stdout from 3.96.148.74:22: Hello from ec2-user@ip-172-20-52-97.ca-central-1.compute.internal
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
May 23 06:31:18.311: INFO: Got stdout from 99.79.69.161:22: stdout
May 23 06:31:18.311: INFO: Got stderr from 99.79.69.161:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing ec2-user@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [k8s.io] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:23.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-6591" for this suite.


• [SLOW TEST:10.696 seconds]
[k8s.io] [sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:23.576: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 81 lines ...
• [SLOW TEST:5.710 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":11,"skipped":31,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:06.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":5,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:25.392: INFO: Only supported for providers [openstack] (not aws)
... skipping 56 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":0,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:29:22.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 92 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:537
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:552
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":3,"skipped":0,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:30:44.826: INFO: >>> kubeConfig: /root/.kube/config
... skipping 50 lines ...
May 23 06:30:48.570: INFO: PersistentVolumeClaim csi-hostpathr7w77 found but phase is Pending instead of Bound.
May 23 06:30:50.607: INFO: PersistentVolumeClaim csi-hostpathr7w77 found but phase is Pending instead of Bound.
May 23 06:30:52.642: INFO: PersistentVolumeClaim csi-hostpathr7w77 found but phase is Pending instead of Bound.
May 23 06:30:54.675: INFO: PersistentVolumeClaim csi-hostpathr7w77 found and phase=Bound (8.172468283s)
STEP: Creating pod pod-subpath-test-dynamicpv-vpdg
STEP: Creating a pod to test subpath
May 23 06:30:54.779: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vpdg" in namespace "provisioning-5480" to be "Succeeded or Failed"
May 23 06:30:54.813: INFO: Pod "pod-subpath-test-dynamicpv-vpdg": Phase="Pending", Reason="", readiness=false. Elapsed: 33.677613ms
May 23 06:30:56.848: INFO: Pod "pod-subpath-test-dynamicpv-vpdg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068068886s
May 23 06:30:58.882: INFO: Pod "pod-subpath-test-dynamicpv-vpdg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102113119s
May 23 06:31:00.916: INFO: Pod "pod-subpath-test-dynamicpv-vpdg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136227798s
May 23 06:31:02.950: INFO: Pod "pod-subpath-test-dynamicpv-vpdg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.170513014s
STEP: Saw pod success
May 23 06:31:02.950: INFO: Pod "pod-subpath-test-dynamicpv-vpdg" satisfied condition "Succeeded or Failed"
May 23 06:31:02.984: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-vpdg container test-container-subpath-dynamicpv-vpdg: <nil>
STEP: delete the pod
May 23 06:31:03.078: INFO: Waiting for pod pod-subpath-test-dynamicpv-vpdg to disappear
May 23 06:31:03.118: INFO: Pod pod-subpath-test-dynamicpv-vpdg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vpdg
May 23 06:31:03.118: INFO: Deleting pod "pod-subpath-test-dynamicpv-vpdg" in namespace "provisioning-5480"
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:31.939: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 67 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":5,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:30:52.592: INFO: >>> kubeConfig: /root/.kube/config
... skipping 94 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":6,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:32.401: INFO: Only supported for providers [azure] (not aws)
... skipping 74 lines ...
• [SLOW TEST:13.092 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":13,"skipped":140,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:34.328: INFO: Driver local doesn't support ext3 -- skipping
... skipping 75 lines ...
• [SLOW TEST:16.775 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
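The publish-OpenAPI case above covers a CRD that preserves unknown fields at the schema root. As a rough, hypothetical illustration (not the manifest the suite generates), a v1 CRD expressing that could be built as in the sketch below; the widgets.example.com group and Widget kind are made up for the example.

// Hypothetical sketch: a CRD whose root schema sets x-kubernetes-preserve-unknown-fields.
package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	preserve := true
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // illustrative name
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve, // keep unknown fields at the schema root
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", crd)
}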

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:36.070: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 133 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-958bd8e6-947c-4759-87f7-008fcfe1aaf9
STEP: Creating a pod to test consume configMaps
May 23 06:31:32.238: INFO: Waiting up to 5m0s for pod "pod-configmaps-a282cec8-8c99-4e19-b10a-fff68ec1b19e" in namespace "configmap-4937" to be "Succeeded or Failed"
May 23 06:31:32.272: INFO: Pod "pod-configmaps-a282cec8-8c99-4e19-b10a-fff68ec1b19e": Phase="Pending", Reason="", readiness=false. Elapsed: 33.657396ms
May 23 06:31:34.311: INFO: Pod "pod-configmaps-a282cec8-8c99-4e19-b10a-fff68ec1b19e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072955058s
May 23 06:31:36.347: INFO: Pod "pod-configmaps-a282cec8-8c99-4e19-b10a-fff68ec1b19e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109022847s
STEP: Saw pod success
May 23 06:31:36.347: INFO: Pod "pod-configmaps-a282cec8-8c99-4e19-b10a-fff68ec1b19e" satisfied condition "Succeeded or Failed"
May 23 06:31:36.381: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-configmaps-a282cec8-8c99-4e19-b10a-fff68ec1b19e container configmap-volume-test: <nil>
STEP: delete the pod
May 23 06:31:36.479: INFO: Waiting for pod pod-configmaps-a282cec8-8c99-4e19-b10a-fff68ec1b19e to disappear
May 23 06:31:36.512: INFO: Pod pod-configmaps-a282cec8-8c99-4e19-b10a-fff68ec1b19e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:36.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4937" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}
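The ConfigMap "volume with mappings" case that just passed remaps a ConfigMap key to a different file path inside the pod. A minimal, hypothetical sketch with the Kubernetes Go API types follows; the configmap-demo name, the data-1 key, and the busybox image are assumptions for illustration, not the suite's generated values.

// Hypothetical sketch: consume a ConfigMap as a volume with key-to-path mappings.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mappings-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-demo"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",         // key in the ConfigMap
							Path: "path/to/data-1", // remapped file path inside the mount
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cfg",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}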

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "services-3324" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:36.904: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 86 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 14 lines ...
STEP: Destroying namespace "services-6629" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":9,"skipped":48,"failed":0}
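The NodePort out-of-range check above relies on the API server rejecting a Service that requests a node port outside the configured service-node-port-range (30000-32767 by default). A hypothetical sketch of such a spec, using the Kubernetes Go API types, is shown below; the nodeport-range-demo name, selector, and port numbers are illustrative only.

// Hypothetical sketch: a NodePort Service requesting a port outside the default
// allocatable range; the API server is expected to reject it on create/update.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-range-demo"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "demo"}, // illustrative selector
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
				NodePort:   12345, // below the default 30000-32767 range
			}},
		},
	}
	fmt.Printf("%+v\n", svc)
}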

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:37.429: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 115 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:752
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:805
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":6,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:40.919: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
&ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-5799 /apis/apps/v1/namespaces/deployment-5799/replicasets/webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 11441 3 2021-05-23 06:31:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 0c5d86d7-e822-403c-a5fc-00981b20469c 0xc002148267 0xc002148268}] []  [{kube-controller-manager Update apps/v1 2021-05-23 06:31:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5d86d7-e822-403c-a5fc-00981b20469c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0021482e8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 23 06:31:40.861: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May 23 06:31:40.861: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7  deployment-5799 /apis/apps/v1/namespaces/deployment-5799/replicasets/webserver-deployment-dd94f59b7 255f639a-e6e8-494a-a969-1f7e5b8c0efd 11672 3 2021-05-23 06:31:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 0c5d86d7-e822-403c-a5fc-00981b20469c 0xc002148347 0xc002148348}] []  [{kube-controller-manager Update apps/v1 2021-05-23 06:31:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0c5d86d7-e822-403c-a5fc-00981b20469c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0021483b8 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:9,AvailableReplicas:9,Conditions:[]ReplicaSetCondition{},},}
May 23 06:31:40.902: INFO: Pod "webserver-deployment-795d758f88-5bmbd" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-5bmbd webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-5bmbd 9e4423b9-ffce-47d7-a9ba-badbdeef4772 11413 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002374be7 0xc002374be8}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-132.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.902: INFO: Pod "webserver-deployment-795d758f88-92bst" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-92bst webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-92bst 3fa245ad-b568-4a2d-85be-e2f76d82b665 11345 0 2021-05-23 06:31:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002374d10 0xc002374d11}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.250\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-57.ca-central-1.compute.internal,HostNetwor
k:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.41.57,PodIP:100.96.4.250,StartTime:2021-05-23 06:31:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.902: INFO: Pod "webserver-deployment-795d758f88-9whqt" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-9whqt webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-9whqt 641f3045-4c7e-4958-b698-0ea0d08a63d9 11411 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002374ed7 0xc002374ed8}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-132.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.903: INFO: Pod "webserver-deployment-795d758f88-dqnhp" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-dqnhp webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-dqnhp e3b0c7e0-65a9-42b8-bae7-bb5f7f5f53e2 11340 0 2021-05-23 06:31:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002375000 0xc002375001}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"100.96.4.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-57.ca-central-1.compute.internal,HostNetwor
k:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.41.57,PodIP:100.96.4.232,StartTime:2021-05-23 06:31:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:100.96.4.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.903: INFO: Pod "webserver-deployment-795d758f88-fvrxc" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-fvrxc webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-fvrxc fc6f0aec-2e83-41dd-ba4f-18289fd602da 11393 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc0023751c7 0xc0023751c8}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-97.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.903: INFO: Pod "webserver-deployment-795d758f88-j7xp9" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-j7xp9 webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-j7xp9 027289c7-be3f-45e8-8a51-46e1e859777d 11404 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc0023752f0 0xc0023752f1}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-36-181.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.903: INFO: Pod "webserver-deployment-795d758f88-jt285" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-jt285 webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-jt285 d6585a53-2d3a-47ce-9992-1b1986df9611 11421 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002375420 0xc002375421}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-36-181.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 3 lines ...
&Pod{ObjectMeta:{webserver-deployment-795d758f88-mptwj webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-mptwj e2ac95d7-5900-4d69-9495-a08cc8f7f19c 11088 0 2021-05-23 06:31:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002375680 0xc002375681}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-23 06:31:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-132.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOp
tions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.132,PodIP:,StartTime:2021-05-23 06:31:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.904: INFO: Pod "webserver-deployment-795d758f88-rz4xs" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-rz4xs webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-rz4xs c2eecf90-4f4a-4320-a146-a7359a12c3b9 11347 0 2021-05-23 06:31:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002375817 0xc002375818}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-97.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.97,PodIP:,StartTime:2021-05-23 06:31:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.904: INFO: Pod "webserver-deployment-795d758f88-wdwhj" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-wdwhj webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-wdwhj b5c8f445-424a-41d7-86f3-124232b53451 11387 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc0023759d7 0xc0023759d8}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-97.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.904: INFO: Pod "webserver-deployment-795d758f88-wwqck" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-wwqck webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-wwqck c799f95e-1f89-4e07-850f-7f413d1a514e 11674 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002375b00 0xc002375b01}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-41-57.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.41.57,PodIP:,StartTime:2021-05-23 06:31:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = Error response from daemon: pull access denied for webserver, repository does not exist or may require 'docker login': denied: requested access to the resource is denied,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.904: INFO: Pod "webserver-deployment-795d758f88-wxz7q" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-wxz7q webserver-deployment-795d758f88- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-795d758f88-wxz7q 71ecb7d7-465a-45e2-8e55-6c0556313005 11609 0 2021-05-23 06:31:36 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 42da9b0a-8924-4841-aa3e-b06f9625af8d 0xc002375ca7 0xc002375ca8}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42da9b0a-8924-4841-aa3e-b06f9625af8d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-05-23 06:31:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-52-97.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOpt
ions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.52.97,PodIP:,StartTime:2021-05-23 06:31:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.904: INFO: Pod "webserver-deployment-dd94f59b7-2jg5q" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2jg5q webserver-deployment-dd94f59b7- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-dd94f59b7-2jg5q ef2c705f-0860-4b73-8157-596ebeb8d609 11399 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 255f639a-e6e8-494a-a969-1f7e5b8c0efd 0xc002375e47 0xc002375e48}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255f639a-e6e8-494a-a969-1f7e5b8c0efd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-36-181.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 23 06:31:40.904: INFO: Pod "webserver-deployment-dd94f59b7-5b7l6" is not available:
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5b7l6 webserver-deployment-dd94f59b7- deployment-5799 /api/v1/namespaces/deployment-5799/pods/webserver-deployment-dd94f59b7-5b7l6 71686db1-2981-45b9-8082-9b03406619bc 11439 0 2021-05-23 06:31:38 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 255f639a-e6e8-494a-a969-1f7e5b8c0efd 0xc002375f60 0xc002375f61}] []  [{kube-controller-manager Update v1 2021-05-23 06:31:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"255f639a-e6e8-494a-a969-1f7e5b8c0efd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qmq5s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qmq5s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qmq5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-36-181.ca-central-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-05-23 06:31:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 42 lines ...
• [SLOW TEST:16.028 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":12,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:41.008: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 95 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver gluster doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
... skipping 114 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":34,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 431 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":7,"skipped":30,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:42.180: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 72 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1570
------------------------------
... skipping 57 lines ...
• [SLOW TEST:10.701 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:26.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:235
STEP: Creating a job
STEP: Ensuring job exceeds backoffLimit
STEP: Checking that 2 pods were created and their status is Failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:43.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2400" for this suite.


• [SLOW TEST:16.858 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:235
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":7,"skipped":47,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":7,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:43.143: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 236 lines ...
May 23 06:31:33.929: INFO: Pod aws-client still exists
May 23 06:31:35.888: INFO: Waiting for pod aws-client to disappear
May 23 06:31:35.923: INFO: Pod aws-client still exists
May 23 06:31:37.888: INFO: Waiting for pod aws-client to disappear
May 23 06:31:37.923: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
May 23 06:31:38.214: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0d09414535beab08c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0d09414535beab08c is currently attached to i-0ec4cc948b7b1f9be
	status code: 400, request id: 31c212bb-3e90-4d81-838b-084c18fd1c15
May 23 06:31:43.476: INFO: Successfully deleted PD "aws://ca-central-1a/vol-0d09414535beab08c".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:43.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-882" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":2,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:43.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5140" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":8,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:43.682: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":29,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
May 23 06:30:41.573: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8562-aws-scsx58w
STEP: creating a claim
May 23 06:30:41.610: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-h8vj
STEP: Creating a pod to test subpath
May 23 06:30:41.718: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-h8vj" in namespace "provisioning-8562" to be "Succeeded or Failed"
May 23 06:30:41.755: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 37.396004ms
May 23 06:30:43.790: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072388652s
May 23 06:30:45.826: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107614988s
May 23 06:30:47.860: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142318438s
May 23 06:30:49.895: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177297129s
May 23 06:30:51.935: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217166002s
... skipping 8 lines ...
May 23 06:31:10.256: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 28.538006329s
May 23 06:31:12.291: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.572849541s
May 23 06:31:14.326: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 32.607820695s
May 23 06:31:16.362: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 34.643863675s
May 23 06:31:18.398: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.680038775s
STEP: Saw pod success
May 23 06:31:18.398: INFO: Pod "pod-subpath-test-dynamicpv-h8vj" satisfied condition "Succeeded or Failed"
May 23 06:31:18.433: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-h8vj container test-container-subpath-dynamicpv-h8vj: <nil>
STEP: delete the pod
May 23 06:31:18.515: INFO: Waiting for pod pod-subpath-test-dynamicpv-h8vj to disappear
May 23 06:31:18.553: INFO: Pod pod-subpath-test-dynamicpv-h8vj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-h8vj
May 23 06:31:18.553: INFO: Deleting pod "pod-subpath-test-dynamicpv-h8vj" in namespace "provisioning-8562"
STEP: Creating pod pod-subpath-test-dynamicpv-h8vj
STEP: Creating a pod to test subpath
May 23 06:31:18.630: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-h8vj" in namespace "provisioning-8562" to be "Succeeded or Failed"
May 23 06:31:18.665: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 34.305753ms
May 23 06:31:20.745: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114748598s
May 23 06:31:22.781: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150313857s
May 23 06:31:24.880: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249260677s
May 23 06:31:26.946: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315175648s
May 23 06:31:28.981: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.350899288s
May 23 06:31:31.016: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.385885109s
May 23 06:31:33.073: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.442868717s
May 23 06:31:35.109: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.478498575s
May 23 06:31:37.144: INFO: Pod "pod-subpath-test-dynamicpv-h8vj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.513415s
STEP: Saw pod success
May 23 06:31:37.144: INFO: Pod "pod-subpath-test-dynamicpv-h8vj" satisfied condition "Succeeded or Failed"
May 23 06:31:37.179: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-subpath-test-dynamicpv-h8vj container test-container-subpath-dynamicpv-h8vj: <nil>
STEP: delete the pod
May 23 06:31:37.260: INFO: Waiting for pod pod-subpath-test-dynamicpv-h8vj to disappear
May 23 06:31:37.294: INFO: Pod pod-subpath-test-dynamicpv-h8vj no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-h8vj
May 23 06:31:37.294: INFO: Deleting pod "pod-subpath-test-dynamicpv-h8vj" in namespace "provisioning-8562"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":6,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:47.726: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212
May 23 06:31:42.395: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-8efc9e86-3399-4910-83f1-057bd563ace7" in namespace "security-context-test-3775" to be "Succeeded or Failed"
May 23 06:31:42.429: INFO: Pod "busybox-readonly-true-8efc9e86-3399-4910-83f1-057bd563ace7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.708906ms
May 23 06:31:44.463: INFO: Pod "busybox-readonly-true-8efc9e86-3399-4910-83f1-057bd563ace7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068492078s
May 23 06:31:46.509: INFO: Pod "busybox-readonly-true-8efc9e86-3399-4910-83f1-057bd563ace7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114382066s
May 23 06:31:48.546: INFO: Pod "busybox-readonly-true-8efc9e86-3399-4910-83f1-057bd563ace7": Phase="Failed", Reason="", readiness=false. Elapsed: 6.15120102s
May 23 06:31:48.546: INFO: Pod "busybox-readonly-true-8efc9e86-3399-4910-83f1-057bd563ace7" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:48.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3775" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:212
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":32,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:48.652: INFO: Driver local doesn't support ntfs -- skipping
... skipping 201 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:92
STEP: Creating a pod to test downward API volume plugin
May 23 06:31:47.985: INFO: Waiting up to 5m0s for pod "metadata-volume-2a774c83-0fc4-4519-9312-31e0a5d22a7d" in namespace "projected-4103" to be "Succeeded or Failed"
May 23 06:31:48.023: INFO: Pod "metadata-volume-2a774c83-0fc4-4519-9312-31e0a5d22a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.17634ms
May 23 06:31:50.063: INFO: Pod "metadata-volume-2a774c83-0fc4-4519-9312-31e0a5d22a7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078205671s
May 23 06:31:52.100: INFO: Pod "metadata-volume-2a774c83-0fc4-4519-9312-31e0a5d22a7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114386285s
STEP: Saw pod success
May 23 06:31:52.100: INFO: Pod "metadata-volume-2a774c83-0fc4-4519-9312-31e0a5d22a7d" satisfied condition "Succeeded or Failed"
May 23 06:31:52.134: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod metadata-volume-2a774c83-0fc4-4519-9312-31e0a5d22a7d container client-container: <nil>
STEP: delete the pod
May 23 06:31:52.214: INFO: Waiting for pod metadata-volume-2a774c83-0fc4-4519-9312-31e0a5d22a7d to disappear
May 23 06:31:52.249: INFO: Pod metadata-volume-2a774c83-0fc4-4519-9312-31e0a5d22a7d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:31:52.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4103" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":7,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:52.356: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:57.942: INFO: Driver windows-gcepd doesn't support ext4 -- skipping
... skipping 95 lines ...
• [SLOW TEST:23.704 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":14,"skipped":146,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:58.075: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 93 lines ...
• [SLOW TEST:20.807 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":10,"skipped":54,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:31:58.276: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 166 lines ...
• [SLOW TEST:16.760 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2030
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":4,"skipped":47,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:47
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:52
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 23 06:31:59.246: INFO: Waiting up to 5m0s for pod "pod-6093e646-f851-44e9-8856-0c93649b87fb" in namespace "emptydir-183" to be "Succeeded or Failed"
May 23 06:31:59.281: INFO: Pod "pod-6093e646-f851-44e9-8856-0c93649b87fb": Phase="Pending", Reason="", readiness=false. Elapsed: 34.242115ms
May 23 06:32:01.315: INFO: Pod "pod-6093e646-f851-44e9-8856-0c93649b87fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069083382s
STEP: Saw pod success
May 23 06:32:01.316: INFO: Pod "pod-6093e646-f851-44e9-8856-0c93649b87fb" satisfied condition "Succeeded or Failed"
May 23 06:32:01.350: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-6093e646-f851-44e9-8856-0c93649b87fb container test-container: <nil>
STEP: delete the pod
May 23 06:32:01.430: INFO: Waiting for pod pod-6093e646-f851-44e9-8856-0c93649b87fb to disappear
May 23 06:32:01.466: INFO: Pod pod-6093e646-f851-44e9-8856-0c93649b87fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 29 lines ...
May 23 06:31:45.008: INFO: PersistentVolumeClaim pvc-tmg9w found but phase is Pending instead of Bound.
May 23 06:31:47.042: INFO: PersistentVolumeClaim pvc-tmg9w found and phase=Bound (14.321356724s)
May 23 06:31:47.042: INFO: Waiting up to 3m0s for PersistentVolume local-66ksg to have phase Bound
May 23 06:31:47.078: INFO: PersistentVolume local-66ksg found and phase=Bound (35.784078ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7ppv
STEP: Creating a pod to test subpath
May 23 06:31:47.190: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7ppv" in namespace "provisioning-333" to be "Succeeded or Failed"
May 23 06:31:47.224: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv": Phase="Pending", Reason="", readiness=false. Elapsed: 34.43554ms
May 23 06:31:49.261: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071084056s
May 23 06:31:51.294: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104635917s
May 23 06:31:53.328: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138684371s
May 23 06:31:55.362: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172794432s
May 23 06:31:57.396: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206713843s
May 23 06:31:59.433: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.242904862s
May 23 06:32:01.466: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.276737943s
STEP: Saw pod success
May 23 06:32:01.466: INFO: Pod "pod-subpath-test-preprovisionedpv-7ppv" satisfied condition "Succeeded or Failed"
May 23 06:32:01.500: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-7ppv container test-container-volume-preprovisionedpv-7ppv: <nil>
STEP: delete the pod
May 23 06:32:01.585: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7ppv to disappear
May 23 06:32:01.618: INFO: Pod pod-subpath-test-preprovisionedpv-7ppv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7ppv
May 23 06:32:01.618: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7ppv" in namespace "provisioning-333"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:02.201: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:31:58.291: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37" in namespace "projected-2038" to be "Succeeded or Failed"
May 23 06:31:58.325: INFO: Pod "downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37": Phase="Pending", Reason="", readiness=false. Elapsed: 33.892682ms
May 23 06:32:00.359: INFO: Pod "downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067795384s
May 23 06:32:02.401: INFO: Pod "downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10961958s
May 23 06:32:04.435: INFO: Pod "downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.143540895s
STEP: Saw pod success
May 23 06:32:04.435: INFO: Pod "downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37" satisfied condition "Succeeded or Failed"
May 23 06:32:04.469: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37 container client-container: <nil>
STEP: delete the pod
May 23 06:32:04.545: INFO: Waiting for pod downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37 to disappear
May 23 06:32:04.579: INFO: Pod downwardapi-volume-80995c33-6746-473b-a12e-486d2195ea37 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.568 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":147,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":9,"skipped":90,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:58.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:393
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":10,"skipped":90,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:08.541: INFO: Only supported for providers [vsphere] (not aws)
... skipping 56 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361

      Distro debian doesn't support ntfs -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:180
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:31:23.506: INFO: >>> kubeConfig: /root/.kube/config
... skipping 23 lines ...
May 23 06:31:44.183: INFO: PersistentVolumeClaim pvc-8rbqs found but phase is Pending instead of Bound.
May 23 06:31:46.248: INFO: PersistentVolumeClaim pvc-8rbqs found and phase=Bound (14.317427728s)
May 23 06:31:46.248: INFO: Waiting up to 3m0s for PersistentVolume local-pf8j2 to have phase Bound
May 23 06:31:46.296: INFO: PersistentVolume local-pf8j2 found and phase=Bound (47.726899ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-bzw7
STEP: Creating a pod to test subpath
May 23 06:31:46.429: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-bzw7" in namespace "provisioning-3889" to be "Succeeded or Failed"
May 23 06:31:46.472: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.248122ms
May 23 06:31:48.509: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079192361s
May 23 06:31:50.544: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114344133s
May 23 06:31:52.581: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151972371s
May 23 06:31:54.618: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188060632s
May 23 06:31:56.652: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.222702809s
May 23 06:31:58.687: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.257620026s
May 23 06:32:00.722: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.2925267s
May 23 06:32:02.757: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.327611943s
May 23 06:32:04.794: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.364431821s
May 23 06:32:06.836: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.406224517s
STEP: Saw pod success
May 23 06:32:06.836: INFO: Pod "pod-subpath-test-preprovisionedpv-bzw7" satisfied condition "Succeeded or Failed"
May 23 06:32:06.874: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-bzw7 container test-container-subpath-preprovisionedpv-bzw7: <nil>
STEP: delete the pod
May 23 06:32:06.963: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-bzw7 to disappear
May 23 06:32:06.998: INFO: Pod pod-subpath-test-preprovisionedpv-bzw7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-bzw7
May 23 06:32:06.998: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-bzw7" in namespace "provisioning-3889"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:361
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:08.610: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:08.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2658" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":61,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:376
May 23 06:31:41.256: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 23 06:31:41.290: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-88q4
STEP: Creating a pod to test subpath
May 23 06:31:41.330: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-88q4" in namespace "provisioning-7335" to be "Succeeded or Failed"
May 23 06:31:41.364: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 34.089523ms
May 23 06:31:43.402: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071670517s
May 23 06:31:45.436: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105541907s
May 23 06:31:47.470: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140443428s
May 23 06:31:49.539: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.209192006s
May 23 06:31:51.576: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.245643735s
... skipping 4 lines ...
May 23 06:32:01.746: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.415982708s
May 23 06:32:03.782: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.451876589s
May 23 06:32:05.816: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.485762664s
May 23 06:32:07.850: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.519580352s
May 23 06:32:09.883: INFO: Pod "pod-subpath-test-inlinevolume-88q4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.553415373s
STEP: Saw pod success
May 23 06:32:09.884: INFO: Pod "pod-subpath-test-inlinevolume-88q4" satisfied condition "Succeeded or Failed"
May 23 06:32:09.917: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-88q4 container test-container-subpath-inlinevolume-88q4: <nil>
STEP: delete the pod
May 23 06:32:10.004: INFO: Waiting for pod pod-subpath-test-inlinevolume-88q4 to disappear
May 23 06:32:10.039: INFO: Pod pod-subpath-test-inlinevolume-88q4 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-88q4
May 23 06:32:10.039: INFO: Deleting pod "pod-subpath-test-inlinevolume-88q4" in namespace "provisioning-7335"
... skipping 43 lines ...
May 23 06:31:29.915: INFO: PersistentVolumeClaim pvc-bn96z found but phase is Pending instead of Bound.
May 23 06:31:31.950: INFO: PersistentVolumeClaim pvc-bn96z found and phase=Bound (14.547756995s)
May 23 06:31:31.950: INFO: Waiting up to 3m0s for PersistentVolume local-mgg9s to have phase Bound
May 23 06:31:31.984: INFO: PersistentVolume local-mgg9s found and phase=Bound (34.400397ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tg7z
STEP: Creating a pod to test subpath
May 23 06:31:32.092: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tg7z" in namespace "provisioning-2653" to be "Succeeded or Failed"
May 23 06:31:32.127: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 34.646131ms
May 23 06:31:34.166: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073433879s
May 23 06:31:36.242: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150003742s
May 23 06:31:38.280: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188301271s
May 23 06:31:40.315: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223032124s
May 23 06:31:42.350: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25821599s
May 23 06:31:44.386: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.294368916s
May 23 06:31:46.434: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.342400522s
May 23 06:31:48.473: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.380643989s
STEP: Saw pod success
May 23 06:31:48.473: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z" satisfied condition "Succeeded or Failed"
May 23 06:31:48.513: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-tg7z container test-container-subpath-preprovisionedpv-tg7z: <nil>
STEP: delete the pod
May 23 06:31:48.829: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tg7z to disappear
May 23 06:31:48.866: INFO: Pod pod-subpath-test-preprovisionedpv-tg7z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tg7z
May 23 06:31:48.866: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tg7z" in namespace "provisioning-2653"
STEP: Creating pod pod-subpath-test-preprovisionedpv-tg7z
STEP: Creating a pod to test subpath
May 23 06:31:48.946: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tg7z" in namespace "provisioning-2653" to be "Succeeded or Failed"
May 23 06:31:48.991: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 44.859324ms
May 23 06:31:51.029: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0836982s
May 23 06:31:53.067: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120776845s
May 23 06:31:55.102: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155807321s
May 23 06:31:57.136: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19034094s
May 23 06:31:59.171: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.225172813s
May 23 06:32:01.206: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 12.260068711s
May 23 06:32:03.241: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.29491313s
May 23 06:32:05.279: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 16.332999535s
May 23 06:32:07.313: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Pending", Reason="", readiness=false. Elapsed: 18.36769858s
May 23 06:32:09.349: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.402822176s
STEP: Saw pod success
May 23 06:32:09.349: INFO: Pod "pod-subpath-test-preprovisionedpv-tg7z" satisfied condition "Succeeded or Failed"
May 23 06:32:09.384: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-tg7z container test-container-subpath-preprovisionedpv-tg7z: <nil>
STEP: delete the pod
May 23 06:32:09.468: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tg7z to disappear
May 23 06:32:09.503: INFO: Pod pod-subpath-test-preprovisionedpv-tg7z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tg7z
May 23 06:32:09.503: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tg7z" in namespace "provisioning-2653"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:391
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":96,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:10.383: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 06:32:04.886: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4" in namespace "downward-api-3277" to be "Succeeded or Failed"
May 23 06:32:04.919: INFO: Pod "downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.480073ms
May 23 06:32:06.954: INFO: Pod "downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067831577s
May 23 06:32:08.988: INFO: Pod "downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102088902s
May 23 06:32:11.022: INFO: Pod "downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136381803s
May 23 06:32:13.062: INFO: Pod "downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175954245s
May 23 06:32:15.096: INFO: Pod "downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.210286765s
STEP: Saw pod success
May 23 06:32:15.096: INFO: Pod "downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4" satisfied condition "Succeeded or Failed"
May 23 06:32:15.134: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4 container client-container: <nil>
STEP: delete the pod
May 23 06:32:15.221: INFO: Waiting for pod downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4 to disappear
May 23 06:32:15.254: INFO: Pod downwardapi-volume-56e09c7f-8909-4647-944f-484a0a4291a4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.649 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":150,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:15.345: INFO: Driver hostPath doesn't support ntfs -- skipping
... skipping 23 lines ...
May 23 06:31:58.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
May 23 06:31:58.478: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 23 06:31:58.552: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1248" in namespace "provisioning-1248" to be "Succeeded or Failed"
May 23 06:31:58.586: INFO: Pod "hostpath-symlink-prep-provisioning-1248": Phase="Pending", Reason="", readiness=false. Elapsed: 33.974288ms
May 23 06:32:00.620: INFO: Pod "hostpath-symlink-prep-provisioning-1248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068034012s
May 23 06:32:02.654: INFO: Pod "hostpath-symlink-prep-provisioning-1248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101590916s
STEP: Saw pod success
May 23 06:32:02.654: INFO: Pod "hostpath-symlink-prep-provisioning-1248" satisfied condition "Succeeded or Failed"
May 23 06:32:02.654: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1248" in namespace "provisioning-1248"
May 23 06:32:02.694: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1248" to be fully deleted
May 23 06:32:02.729: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tcwm
STEP: Creating a pod to test subpath
May 23 06:32:02.765: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tcwm" in namespace "provisioning-1248" to be "Succeeded or Failed"
May 23 06:32:02.799: INFO: Pod "pod-subpath-test-inlinevolume-tcwm": Phase="Pending", Reason="", readiness=false. Elapsed: 33.867623ms
May 23 06:32:04.835: INFO: Pod "pod-subpath-test-inlinevolume-tcwm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070582972s
May 23 06:32:06.874: INFO: Pod "pod-subpath-test-inlinevolume-tcwm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108962198s
May 23 06:32:08.908: INFO: Pod "pod-subpath-test-inlinevolume-tcwm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143368429s
May 23 06:32:10.942: INFO: Pod "pod-subpath-test-inlinevolume-tcwm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.177647752s
STEP: Saw pod success
May 23 06:32:10.942: INFO: Pod "pod-subpath-test-inlinevolume-tcwm" satisfied condition "Succeeded or Failed"
May 23 06:32:10.976: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-tcwm container test-container-volume-inlinevolume-tcwm: <nil>
STEP: delete the pod
May 23 06:32:11.060: INFO: Waiting for pod pod-subpath-test-inlinevolume-tcwm to disappear
May 23 06:32:11.093: INFO: Pod pod-subpath-test-inlinevolume-tcwm no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tcwm
May 23 06:32:11.093: INFO: Deleting pod "pod-subpath-test-inlinevolume-tcwm" in namespace "provisioning-1248"
STEP: Deleting pod
May 23 06:32:11.126: INFO: Deleting pod "pod-subpath-test-inlinevolume-tcwm" in namespace "provisioning-1248"
May 23 06:32:11.194: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1248" in namespace "provisioning-1248" to be "Succeeded or Failed"
May 23 06:32:11.227: INFO: Pod "hostpath-symlink-prep-provisioning-1248": Phase="Pending", Reason="", readiness=false. Elapsed: 33.586867ms
May 23 06:32:13.261: INFO: Pod "hostpath-symlink-prep-provisioning-1248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06747299s
May 23 06:32:15.296: INFO: Pod "hostpath-symlink-prep-provisioning-1248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102497117s
STEP: Saw pod success
May 23 06:32:15.296: INFO: Pod "hostpath-symlink-prep-provisioning-1248" satisfied condition "Succeeded or Failed"
May 23 06:32:15.296: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1248" in namespace "provisioning-1248"
May 23 06:32:15.335: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1248" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:15.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1248" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":11,"skipped":65,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:15.465: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 76 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:15.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5955" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":12,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:15.857: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:16.170: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 129 lines ...
• [SLOW TEST:8.143 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":11,"skipped":96,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:16.723: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151

      Driver csi-hostpath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:169
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":13,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:32:10.190: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
May 23 06:32:10.358: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 23 06:32:10.392: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-lfl7
STEP: Creating a pod to test subpath
May 23 06:32:10.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-lfl7" in namespace "provisioning-6366" to be "Succeeded or Failed"
May 23 06:32:10.461: INFO: Pod "pod-subpath-test-inlinevolume-lfl7": Phase="Pending", Reason="", readiness=false. Elapsed: 33.266499ms
May 23 06:32:12.495: INFO: Pod "pod-subpath-test-inlinevolume-lfl7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067639777s
May 23 06:32:14.529: INFO: Pod "pod-subpath-test-inlinevolume-lfl7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101614065s
May 23 06:32:16.563: INFO: Pod "pod-subpath-test-inlinevolume-lfl7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.135606946s
STEP: Saw pod success
May 23 06:32:16.563: INFO: Pod "pod-subpath-test-inlinevolume-lfl7" satisfied condition "Succeeded or Failed"
May 23 06:32:16.597: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-lfl7 container test-container-volume-inlinevolume-lfl7: <nil>
STEP: delete the pod
May 23 06:32:16.699: INFO: Waiting for pod pod-subpath-test-inlinevolume-lfl7 to disappear
May 23 06:32:16.734: INFO: Pod pod-subpath-test-inlinevolume-lfl7 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-lfl7
May 23 06:32:16.734: INFO: Deleting pod "pod-subpath-test-inlinevolume-lfl7" in namespace "provisioning-6366"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":14,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:16.881: INFO: Driver hostPathSymlink doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 20 lines ...
May 23 06:32:09.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
May 23 06:32:09.219: INFO: Waiting up to 5m0s for pod "security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012" in namespace "security-context-8886" to be "Succeeded or Failed"
May 23 06:32:09.254: INFO: Pod "security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012": Phase="Pending", Reason="", readiness=false. Elapsed: 35.033291ms
May 23 06:32:11.288: INFO: Pod "security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069834259s
May 23 06:32:13.323: INFO: Pod "security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104424786s
May 23 06:32:15.358: INFO: Pod "security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139001754s
May 23 06:32:17.393: INFO: Pod "security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17421357s
May 23 06:32:19.428: INFO: Pod "security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.208925164s
STEP: Saw pod success
May 23 06:32:19.428: INFO: Pod "security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012" satisfied condition "Succeeded or Failed"
May 23 06:32:19.462: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012 container test-container: <nil>
STEP: delete the pod
May 23 06:32:19.547: INFO: Waiting for pod security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012 to disappear
May 23 06:32:19.584: INFO: Pod security-context-3b86360c-9ed1-463f-a2a2-d776daa0f012 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.646 seconds]
[k8s.io] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":8,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 26 lines ...
May 23 06:32:14.586: INFO: PersistentVolumeClaim pvc-tjlv8 found but phase is Pending instead of Bound.
May 23 06:32:16.627: INFO: PersistentVolumeClaim pvc-tjlv8 found and phase=Bound (14.314559056s)
May 23 06:32:16.627: INFO: Waiting up to 3m0s for PersistentVolume local-stqgw to have phase Bound
May 23 06:32:16.665: INFO: PersistentVolume local-stqgw found and phase=Bound (37.994627ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-t4t6
STEP: Creating a pod to test subpath
May 23 06:32:16.775: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-t4t6" in namespace "provisioning-8535" to be "Succeeded or Failed"
May 23 06:32:16.809: INFO: Pod "pod-subpath-test-preprovisionedpv-t4t6": Phase="Pending", Reason="", readiness=false. Elapsed: 34.620834ms
May 23 06:32:18.847: INFO: Pod "pod-subpath-test-preprovisionedpv-t4t6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071848889s
May 23 06:32:20.883: INFO: Pod "pod-subpath-test-preprovisionedpv-t4t6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108039754s
STEP: Saw pod success
May 23 06:32:20.883: INFO: Pod "pod-subpath-test-preprovisionedpv-t4t6" satisfied condition "Succeeded or Failed"
May 23 06:32:20.919: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-t4t6 container test-container-volume-preprovisionedpv-t4t6: <nil>
STEP: delete the pod
May 23 06:32:21.019: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-t4t6 to disappear
May 23 06:32:21.053: INFO: Pod pod-subpath-test-preprovisionedpv-t4t6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-t4t6
May 23 06:32:21.054: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-t4t6" in namespace "provisioning-8535"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:191
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":52,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:30:00.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should implement legacy replacement when the update strategy is OnDelete
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:499
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":5,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:23.029: INFO: Driver local doesn't support ext4 -- skipping
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":97,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:23.171: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:23.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7273" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":9,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
May 23 06:31:30.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
May 23 06:31:30.643: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 23 06:31:30.726: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7814" in namespace "provisioning-7814" to be "Succeeded or Failed"
May 23 06:31:30.798: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Pending", Reason="", readiness=false. Elapsed: 71.547136ms
May 23 06:31:32.847: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120745619s
May 23 06:31:34.884: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157209248s
May 23 06:31:36.918: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191399237s
May 23 06:31:38.970: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Pending", Reason="", readiness=false. Elapsed: 8.243599438s
May 23 06:31:41.004: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Pending", Reason="", readiness=false. Elapsed: 10.277858421s
May 23 06:31:43.039: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Pending", Reason="", readiness=false. Elapsed: 12.312449774s
May 23 06:31:45.073: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.346897733s
STEP: Saw pod success
May 23 06:31:45.073: INFO: Pod "hostpath-symlink-prep-provisioning-7814" satisfied condition "Succeeded or Failed"
May 23 06:31:45.073: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7814" in namespace "provisioning-7814"
May 23 06:31:45.136: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7814" to be fully deleted
May 23 06:31:45.180: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-vsnp
STEP: Creating a pod to test atomic-volume-subpath
May 23 06:31:45.215: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-vsnp" in namespace "provisioning-7814" to be "Succeeded or Failed"
May 23 06:31:45.249: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Pending", Reason="", readiness=false. Elapsed: 34.060353ms
May 23 06:31:47.284: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068670791s
May 23 06:31:49.320: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104345379s
May 23 06:31:51.355: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139371534s
May 23 06:31:53.389: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173890967s
May 23 06:31:55.424: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.208486765s
... skipping 8 lines ...
May 23 06:32:13.740: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Running", Reason="", readiness=true. Elapsed: 28.525207311s
May 23 06:32:15.775: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Running", Reason="", readiness=true. Elapsed: 30.560081122s
May 23 06:32:17.812: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Running", Reason="", readiness=true. Elapsed: 32.597179391s
May 23 06:32:19.847: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Running", Reason="", readiness=true. Elapsed: 34.631972371s
May 23 06:32:21.883: INFO: Pod "pod-subpath-test-inlinevolume-vsnp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.668161323s
STEP: Saw pod success
May 23 06:32:21.883: INFO: Pod "pod-subpath-test-inlinevolume-vsnp" satisfied condition "Succeeded or Failed"
May 23 06:32:21.918: INFO: Trying to get logs from node ip-172-20-36-181.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-vsnp container test-container-subpath-inlinevolume-vsnp: <nil>
STEP: delete the pod
May 23 06:32:21.999: INFO: Waiting for pod pod-subpath-test-inlinevolume-vsnp to disappear
May 23 06:32:22.033: INFO: Pod pod-subpath-test-inlinevolume-vsnp no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-vsnp
May 23 06:32:22.033: INFO: Deleting pod "pod-subpath-test-inlinevolume-vsnp" in namespace "provisioning-7814"
STEP: Deleting pod
May 23 06:32:22.066: INFO: Deleting pod "pod-subpath-test-inlinevolume-vsnp" in namespace "provisioning-7814"
May 23 06:32:22.141: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7814" in namespace "provisioning-7814" to be "Succeeded or Failed"
May 23 06:32:22.175: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Pending", Reason="", readiness=false. Elapsed: 33.868227ms
May 23 06:32:24.213: INFO: Pod "hostpath-symlink-prep-provisioning-7814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.071122699s
STEP: Saw pod success
May 23 06:32:24.213: INFO: Pod "hostpath-symlink-prep-provisioning-7814" satisfied condition "Succeeded or Failed"
May 23 06:32:24.213: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7814" in namespace "provisioning-7814"
May 23 06:32:24.254: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7814" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:24.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7814" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:227
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:24.376: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 107 lines ...
• [SLOW TEST:29.174 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":8,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:27.174: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 66 lines ...
• [SLOW TEST:10.819 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:27.331: INFO: Driver gluster doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-86b8ee11-8d0f-41a1-a78d-8ffaf33b4f43
STEP: Creating a pod to test consume configMaps
May 23 06:32:19.920: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307" in namespace "projected-2656" to be "Succeeded or Failed"
May 23 06:32:19.956: INFO: Pod "pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307": Phase="Pending", Reason="", readiness=false. Elapsed: 36.62082ms
May 23 06:32:21.991: INFO: Pod "pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071618255s
May 23 06:32:24.026: INFO: Pod "pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106436378s
May 23 06:32:26.061: INFO: Pod "pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141203371s
May 23 06:32:28.096: INFO: Pod "pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176279276s
STEP: Saw pod success
May 23 06:32:28.096: INFO: Pod "pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307" satisfied condition "Succeeded or Failed"
May 23 06:32:28.131: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307 container projected-configmap-volume-test: <nil>
STEP: delete the pod
May 23 06:32:28.264: INFO: Waiting for pod pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307 to disappear
May 23 06:32:28.299: INFO: Pod pod-projected-configmaps-e60cba85-e2cb-431d-bb83-3aeaa5ba1307 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.711 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":67,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:28.411: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 20 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:32:16.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:29.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5676" for this suite.


• [SLOW TEST:12.315 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
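The Job case above ("tasks sometimes fail and are locally restarted") boils down to a Job whose pod template uses restartPolicy OnFailure, so the kubelet restarts failed containers in place until the Job reaches its completion count. A sketch assuming the batch/v1 and core/v1 types; the command that "sometimes fails" is illustrative only:

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
		Spec: batchv1.JobSpec{
			Parallelism: int32Ptr(2),
			Completions: int32Ptr(4),
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure => failed containers are restarted on the same node
					// ("locally restarted"); Never would create replacement pods instead.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// Illustrative: fail on the first run, succeed on a later attempt.
						Command: []string{"sh", "-c", "[ -f /tmp/ran ] || { touch /tmp/ran; exit 1; }; exit 0"},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}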
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:32:02.223: INFO: >>> kubeConfig: /root/.kube/config
... skipping 100 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:47
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:56
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 23 06:32:23.493: INFO: Waiting up to 5m0s for pod "pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1" in namespace "emptydir-2639" to be "Succeeded or Failed"
May 23 06:32:23.529: INFO: Pod "pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 35.112778ms
May 23 06:32:25.563: INFO: Pod "pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069935998s
May 23 06:32:27.598: INFO: Pod "pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104695123s
May 23 06:32:29.633: INFO: Pod "pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139175605s
STEP: Saw pod success
May 23 06:32:29.633: INFO: Pod "pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1" satisfied condition "Succeeded or Failed"
May 23 06:32:29.667: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1 container test-container: <nil>
STEP: delete the pod
May 23 06:32:29.745: INFO: Waiting for pod pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1 to disappear
May 23 06:32:29.779: INFO: Pod pod-2fccff5a-4dca-4f59-be13-4b5fb118bbe1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
    new files should be created with FSGroup ownership when container is non-root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:56
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":10,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:29.863: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 147 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":9,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:32.792: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 226 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:32:29.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:38.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5752" for this suite.


• [SLOW TEST:8.365 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:117
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":11,"skipped":60,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:38.297: INFO: Only supported for providers [azure] (not aws)
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:38.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9654" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":12,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:88
May 23 06:32:38.623: INFO: Driver "nfs" does not support topology - skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 4 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "nfs" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:97
------------------------------
... skipping 175 lines ...
May 23 06:31:38.449: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-278
May 23 06:31:38.485: INFO: creating *v1.StatefulSet: csi-mock-volumes-278-7023/csi-mockplugin-attacher
May 23 06:31:38.531: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-278"
May 23 06:31:38.580: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-278 to register on node ip-172-20-36-181.ca-central-1.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
May 23 06:32:07.201: INFO: Error getting logs for pod inline-volume-9sqgs: the server rejected our request for an unknown reason (get pods inline-volume-9sqgs)
May 23 06:32:07.201: INFO: Deleting pod "inline-volume-9sqgs" in namespace "csi-mock-volumes-278"
May 23 06:32:07.246: INFO: Wait up to 5m0s for pod "inline-volume-9sqgs" to be fully deleted
STEP: Deleting the previously created pod
May 23 06:32:13.315: INFO: Deleting pod "pvc-volume-tester-mmspv" in namespace "csi-mock-volumes-278"
May 23 06:32:13.352: INFO: Wait up to 5m0s for pod "pvc-volume-tester-mmspv" to be fully deleted
STEP: Checking CSI driver logs
May 23 06:32:25.461: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
May 23 06:32:25.461: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-mmspv
May 23 06:32:25.461: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-278
May 23 06:32:25.461: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 096f6c1c-9646-4ef6-a4ff-88ee241bfbbb
May 23 06:32:25.461: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
May 23 06:32:25.461: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-4cd1feb7481b385ac1082ee012119cebb4494d9b7a543dc8358cffaa64719c2d","target_path":"/var/lib/kubelet/pods/096f6c1c-9646-4ef6-a4ff-88ee241bfbbb/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-mmspv
May 23 06:32:25.461: INFO: Deleting pod "pvc-volume-tester-mmspv" in namespace "csi-mock-volumes-278"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-278
STEP: Waiting for namespaces [csi-mock-volumes-278] to vanish
STEP: uninstalling csi mock driver
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:308
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:358
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":6,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:38.774: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 200 lines ...
May 23 06:31:57.033: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-r7n8c] to have phase Bound
May 23 06:31:57.068: INFO: PersistentVolumeClaim pvc-r7n8c found and phase=Bound (35.739606ms)
STEP: Deleting the previously created pod
May 23 06:32:15.237: INFO: Deleting pod "pvc-volume-tester-cntt4" in namespace "csi-mock-volumes-3444"
May 23 06:32:15.273: INFO: Wait up to 5m0s for pod "pvc-volume-tester-cntt4" to be fully deleted
STEP: Checking CSI driver logs
May 23 06:32:19.381: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/cbe0c305-880e-4308-95d4-d9d115944500/volumes/kubernetes.io~csi/pvc-d0d6a6cf-4391-439c-8404-00147a9454c5/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-cntt4
May 23 06:32:19.381: INFO: Deleting pod "pvc-volume-tester-cntt4" in namespace "csi-mock-volumes-3444"
STEP: Deleting claim pvc-r7n8c
May 23 06:32:19.483: INFO: Waiting up to 2m0s for PersistentVolume pvc-d0d6a6cf-4391-439c-8404-00147a9454c5 to get deleted
May 23 06:32:19.516: INFO: PersistentVolume pvc-d0d6a6cf-4391-439c-8404-00147a9454c5 found and phase=Released (33.121286ms)
May 23 06:32:21.550: INFO: PersistentVolume pvc-d0d6a6cf-4391-439c-8404-00147a9454c5 found and phase=Released (2.067068811s)
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:308
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:358
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":6,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:42.962: INFO: Only supported for providers [gce gke] (not aws)
... skipping 54 lines ...
• [SLOW TEST:34.510 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a NodePort service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:119
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service","total":-1,"completed":11,"skipped":97,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:44.931: INFO: Only supported for providers [azure] (not aws)
... skipping 48 lines ...
• [SLOW TEST:21.817 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":13,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:45.031: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:39
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:167
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":7,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:45.543: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 31 lines ...
May 23 06:32:15.524: INFO: Creating resource for dynamic PV
May 23 06:32:15.524: INFO: Using claimSize:5Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-expand-5079-aws-scsz425
STEP: creating a claim
STEP: Expanding non-expandable pvc
May 23 06:32:15.632: INFO: currentPvcSize {{5368709120 0} {<nil>} 5Gi BinarySI}, newSize {{6442450944 0} {<nil>}  BinarySI}
May 23 06:32:15.702: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:17.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:19.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:21.772: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:23.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:25.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:27.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:29.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:31.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:33.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:35.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:37.773: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:39.770: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:41.778: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:43.773: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:45.771: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5079-aws-scsz425",
  	... // 2 identical fields
  }

May 23 06:32:45.838: INFO: Error updating pvc awsfnpfp: PersistentVolumeClaim "awsfnpfp" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:148
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":17,"skipped":152,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:46.029: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-1862e68b-f023-48d2-b4af-ce4c8d770a70
STEP: Creating a pod to test consume secrets
May 23 06:32:45.343: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2e2de8c0-81a5-43e4-8724-e84599c05e93" in namespace "projected-6247" to be "Succeeded or Failed"
May 23 06:32:45.378: INFO: Pod "pod-projected-secrets-2e2de8c0-81a5-43e4-8724-e84599c05e93": Phase="Pending", Reason="", readiness=false. Elapsed: 34.32382ms
May 23 06:32:47.413: INFO: Pod "pod-projected-secrets-2e2de8c0-81a5-43e4-8724-e84599c05e93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069138542s
STEP: Saw pod success
May 23 06:32:47.413: INFO: Pod "pod-projected-secrets-2e2de8c0-81a5-43e4-8724-e84599c05e93" satisfied condition "Succeeded or Failed"
May 23 06:32:47.447: INFO: Trying to get logs from node ip-172-20-52-97.ca-central-1.compute.internal pod pod-projected-secrets-2e2de8c0-81a5-43e4-8724-e84599c05e93 container projected-secret-volume-test: <nil>
STEP: delete the pod
May 23 06:32:47.529: INFO: Waiting for pod pod-projected-secrets-2e2de8c0-81a5-43e4-8724-e84599c05e93 to disappear
May 23 06:32:47.565: INFO: Pod pod-projected-secrets-2e2de8c0-81a5-43e4-8724-e84599c05e93 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
... skipping 11 lines ...
May 23 06:32:38.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
May 23 06:32:38.970: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
May 23 06:32:39.043: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7021" in namespace "provisioning-7021" to be "Succeeded or Failed"
May 23 06:32:39.077: INFO: Pod "hostpath-symlink-prep-provisioning-7021": Phase="Pending", Reason="", readiness=false. Elapsed: 34.26807ms
May 23 06:32:41.112: INFO: Pod "hostpath-symlink-prep-provisioning-7021": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069063351s
STEP: Saw pod success
May 23 06:32:41.112: INFO: Pod "hostpath-symlink-prep-provisioning-7021" satisfied condition "Succeeded or Failed"
May 23 06:32:41.112: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7021" in namespace "provisioning-7021"
May 23 06:32:41.162: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7021" to be fully deleted
May 23 06:32:41.196: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fmrq
STEP: Creating a pod to test subpath
May 23 06:32:41.234: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fmrq" in namespace "provisioning-7021" to be "Succeeded or Failed"
May 23 06:32:41.269: INFO: Pod "pod-subpath-test-inlinevolume-fmrq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.307359ms
May 23 06:32:43.306: INFO: Pod "pod-subpath-test-inlinevolume-fmrq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07122471s
May 23 06:32:45.341: INFO: Pod "pod-subpath-test-inlinevolume-fmrq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107099207s
STEP: Saw pod success
May 23 06:32:45.341: INFO: Pod "pod-subpath-test-inlinevolume-fmrq" satisfied condition "Succeeded or Failed"
May 23 06:32:45.376: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-inlinevolume-fmrq container test-container-subpath-inlinevolume-fmrq: <nil>
STEP: delete the pod
May 23 06:32:45.466: INFO: Waiting for pod pod-subpath-test-inlinevolume-fmrq to disappear
May 23 06:32:45.500: INFO: Pod pod-subpath-test-inlinevolume-fmrq no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fmrq
May 23 06:32:45.500: INFO: Deleting pod "pod-subpath-test-inlinevolume-fmrq" in namespace "provisioning-7021"
STEP: Deleting pod
May 23 06:32:45.535: INFO: Deleting pod "pod-subpath-test-inlinevolume-fmrq" in namespace "provisioning-7021"
May 23 06:32:45.605: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7021" in namespace "provisioning-7021" to be "Succeeded or Failed"
May 23 06:32:45.639: INFO: Pod "hostpath-symlink-prep-provisioning-7021": Phase="Pending", Reason="", readiness=false. Elapsed: 34.244136ms
May 23 06:32:47.674: INFO: Pod "hostpath-symlink-prep-provisioning-7021": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.068959777s
STEP: Saw pod success
May 23 06:32:47.674: INFO: Pod "hostpath-symlink-prep-provisioning-7021" satisfied condition "Succeeded or Failed"
May 23 06:32:47.674: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7021" in namespace "provisioning-7021"
May 23 06:32:47.714: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7021" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:47.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7021" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 7 lines ...
May 23 06:32:24.586: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
May 23 06:32:25.040: INFO: Successfully created a new PD: "aws://ca-central-1a/vol-0efed1280bf49d16c".
May 23 06:32:25.040: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-ftxm
STEP: Creating a pod to test exec-volume-test
May 23 06:32:25.093: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-ftxm" in namespace "volume-6330" to be "Succeeded or Failed"
May 23 06:32:25.126: INFO: Pod "exec-volume-test-inlinevolume-ftxm": Phase="Pending", Reason="", readiness=false. Elapsed: 33.777327ms
May 23 06:32:27.161: INFO: Pod "exec-volume-test-inlinevolume-ftxm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067983931s
May 23 06:32:29.195: INFO: Pod "exec-volume-test-inlinevolume-ftxm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10216625s
May 23 06:32:31.229: INFO: Pod "exec-volume-test-inlinevolume-ftxm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136596511s
May 23 06:32:33.264: INFO: Pod "exec-volume-test-inlinevolume-ftxm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170892238s
May 23 06:32:35.298: INFO: Pod "exec-volume-test-inlinevolume-ftxm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205659372s
May 23 06:32:37.333: INFO: Pod "exec-volume-test-inlinevolume-ftxm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.240064004s
STEP: Saw pod success
May 23 06:32:37.333: INFO: Pod "exec-volume-test-inlinevolume-ftxm" satisfied condition "Succeeded or Failed"
May 23 06:32:37.367: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod exec-volume-test-inlinevolume-ftxm container exec-container-inlinevolume-ftxm: <nil>
STEP: delete the pod
May 23 06:32:37.447: INFO: Waiting for pod exec-volume-test-inlinevolume-ftxm to disappear
May 23 06:32:37.481: INFO: Pod exec-volume-test-inlinevolume-ftxm no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-ftxm
May 23 06:32:37.481: INFO: Deleting pod "exec-volume-test-inlinevolume-ftxm" in namespace "volume-6330"
May 23 06:32:37.636: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0efed1280bf49d16c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0efed1280bf49d16c is currently attached to i-023893f69a5ba17ac
	status code: 400, request id: 7ffbb99f-804c-466c-b403-6293cc94b20d
May 23 06:32:42.925: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0efed1280bf49d16c", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0efed1280bf49d16c is currently attached to i-023893f69a5ba17ac
	status code: 400, request id: 5645229e-d775-4e9a-a178-51c00b0ffae6
May 23 06:32:48.192: INFO: Successfully deleted PD "aws://ca-central-1a/vol-0efed1280bf49d16c".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:48.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6330" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:48.284: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 127 lines ...
• [SLOW TEST:59.903 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:121
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":9,"skipped":54,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":13,"skipped":107,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:49.447: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:49.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7616" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":18,"skipped":154,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:32:49.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
STEP: Destroying namespace "services-650" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":19,"skipped":154,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 06:32:49.821: INFO: >>> kubeConfig: /root/.kube/config
... skipping 49 lines ...
May 23 06:32:44.276: INFO: PersistentVolumeClaim pvc-lbqkj found but phase is Pending instead of Bound.
May 23 06:32:46.309: INFO: PersistentVolumeClaim pvc-lbqkj found and phase=Bound (10.211768025s)
May 23 06:32:46.309: INFO: Waiting up to 3m0s for PersistentVolume local-vq5vp to have phase Bound
May 23 06:32:46.343: INFO: PersistentVolume local-vq5vp found and phase=Bound (33.114614ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6wbk
STEP: Creating a pod to test subpath
May 23 06:32:46.443: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6wbk" in namespace "provisioning-2305" to be "Succeeded or Failed"
May 23 06:32:46.477: INFO: Pod "pod-subpath-test-preprovisionedpv-6wbk": Phase="Pending", Reason="", readiness=false. Elapsed: 33.31605ms
May 23 06:32:48.510: INFO: Pod "pod-subpath-test-preprovisionedpv-6wbk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067045656s
May 23 06:32:50.544: INFO: Pod "pod-subpath-test-preprovisionedpv-6wbk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100776146s
STEP: Saw pod success
May 23 06:32:50.544: INFO: Pod "pod-subpath-test-preprovisionedpv-6wbk" satisfied condition "Succeeded or Failed"
May 23 06:32:50.579: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-6wbk container test-container-subpath-preprovisionedpv-6wbk: <nil>
STEP: delete the pod
May 23 06:32:50.655: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6wbk to disappear
May 23 06:32:50.689: INFO: Pod pod-subpath-test-preprovisionedpv-6wbk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6wbk
May 23 06:32:50.689: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6wbk" in namespace "provisioning-2305"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":55,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 26 lines ...
May 23 06:32:43.351: INFO: PersistentVolumeClaim pvc-jwvvw found but phase is Pending instead of Bound.
May 23 06:32:45.386: INFO: PersistentVolumeClaim pvc-jwvvw found and phase=Bound (14.284673749s)
May 23 06:32:45.386: INFO: Waiting up to 3m0s for PersistentVolume local-7cm7j to have phase Bound
May 23 06:32:45.421: INFO: PersistentVolume local-7cm7j found and phase=Bound (34.935052ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-stf5
STEP: Creating a pod to test subpath
May 23 06:32:45.536: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-stf5" in namespace "provisioning-7052" to be "Succeeded or Failed"
May 23 06:32:45.571: INFO: Pod "pod-subpath-test-preprovisionedpv-stf5": Phase="Pending", Reason="", readiness=false. Elapsed: 35.129244ms
May 23 06:32:47.608: INFO: Pod "pod-subpath-test-preprovisionedpv-stf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072034304s
May 23 06:32:49.644: INFO: Pod "pod-subpath-test-preprovisionedpv-stf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107371457s
STEP: Saw pod success
May 23 06:32:49.644: INFO: Pod "pod-subpath-test-preprovisionedpv-stf5" satisfied condition "Succeeded or Failed"
May 23 06:32:49.679: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-stf5 container test-container-subpath-preprovisionedpv-stf5: <nil>
STEP: delete the pod
May 23 06:32:49.759: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-stf5 to disappear
May 23 06:32:49.794: INFO: Pod pod-subpath-test-preprovisionedpv-stf5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-stf5
May 23 06:32:49.794: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-stf5" in namespace "provisioning-7052"
... skipping 28 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:51.287: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 430 lines ...
May 23 06:32:46.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 23 06:32:48.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 23 06:32:50.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63757348366, loc:(*time.Location)(0x770e980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 23 06:32:53.448: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:53.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2760" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:8.482 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":8,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:54.045: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 84 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:382
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:434
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":14,"skipped":106,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 06:32:55.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-1148" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":15,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:32:55.495: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 136 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should be able to unmount after the subpath directory is deleted
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:439
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":9,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:33:03.787: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 87 lines ...
May 23 06:32:58.251: INFO: PersistentVolumeClaim pvc-kwhfd found but phase is Pending instead of Bound.
May 23 06:33:00.287: INFO: PersistentVolumeClaim pvc-kwhfd found and phase=Bound (6.150499379s)
May 23 06:33:00.287: INFO: Waiting up to 3m0s for PersistentVolume local-mgrqd to have phase Bound
May 23 06:33:00.324: INFO: PersistentVolume local-mgrqd found and phase=Bound (37.227966ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-r66b
STEP: Creating a pod to test exec-volume-test
May 23 06:33:00.430: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-r66b" in namespace "volume-9519" to be "Succeeded or Failed"
May 23 06:33:00.465: INFO: Pod "exec-volume-test-preprovisionedpv-r66b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.918993ms
May 23 06:33:02.501: INFO: Pod "exec-volume-test-preprovisionedpv-r66b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070258368s
May 23 06:33:04.536: INFO: Pod "exec-volume-test-preprovisionedpv-r66b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105599227s
STEP: Saw pod success
May 23 06:33:04.536: INFO: Pod "exec-volume-test-preprovisionedpv-r66b" satisfied condition "Succeeded or Failed"
May 23 06:33:04.571: INFO: Trying to get logs from node ip-172-20-52-132.ca-central-1.compute.internal pod exec-volume-test-preprovisionedpv-r66b container exec-container-preprovisionedpv-r66b: <nil>
STEP: delete the pod
May 23 06:33:04.650: INFO: Waiting for pod exec-volume-test-preprovisionedpv-r66b to disappear
May 23 06:33:04.685: INFO: Pod exec-volume-test-preprovisionedpv-r66b no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-r66b
May 23 06:33:04.685: INFO: Deleting pod "exec-volume-test-preprovisionedpv-r66b" in namespace "volume-9519"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:57
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:128
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:129
May 23 06:33:05.546: INFO: Driver azure-disk doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175

... skipping 161 lines ...
May 23 06:32:28.608: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
May 23 06:32:29.017: INFO: Successfully created a new PD: "aws://ca-central-1a/vol-0fa17ad517512d6a7".
May 23 06:32:29.017: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-c44h
STEP: Creating a pod to test exec-volume-test
May 23 06:32:29.053: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-c44h" in namespace "volume-9320" to be "Succeeded or Failed"
May 23 06:32:29.088: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 34.274686ms
May 23 06:32:31.124: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070259764s
May 23 06:32:33.159: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105062102s
May 23 06:32:35.193: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139991842s
May 23 06:32:37.229: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175147971s
May 23 06:32:39.263: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 10.209991916s
May 23 06:32:41.298: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 12.244959715s
May 23 06:32:43.334: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 14.280916403s
May 23 06:32:45.369: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 16.315678463s
May 23 06:32:47.404: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Pending", Reason="", readiness=false. Elapsed: 18.350491733s
May 23 06:32:49.439: INFO: Pod "exec-volume-test-inlinevolume-c44h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.385487707s
STEP: Saw pod success
May 23 06:32:49.439: INFO: Pod "exec-volume-test-inlinevolume-c44h" satisfied condition "Succeeded or Failed"
May 23 06:32:49.473: INFO: Trying to get logs from node ip-172-20-41-57.ca-central-1.compute.internal pod exec-volume-test-inlinevolume-c44h container exec-container-inlinevolume-c44h: <nil>
STEP: delete the pod
May 23 06:32:49.557: INFO: Waiting for pod exec-volume-test-inlinevolume-c44h to disappear
May 23 06:32:49.592: INFO: Pod exec-volume-test-inlinevolume-c44h no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-c44h
May 23 06:32:49.592: INFO: Deleting pod "exec-volume-test-inlinevolume-c44h" in namespace "volume-9320"
May 23 06:32:49.767: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0fa17ad517512d6a7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0fa17ad517512d6a7 is currently attached to i-00a61631f958158b2
	status code: 400, request id: 703136f0-c796-472e-a230-433b4e6c7984
May 23 06:32:55.035: INFO: Couldn't delete PD "aws://ca-central-1a/vol-0fa17ad517512d6a7", sleeping 5s: error deleting EBS vol
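Editor's note: the repeated "Couldn't delete PD ... VolumeInUse" messages above are the test cleanup path calling DeleteVolume in a loop and sleeping 5s while the node still has the EBS volume attached. A rough aws-sdk-go sketch of that retry loop is below; the deadline and the reuse of the volume ID from the log are illustrative assumptions, not the harness's actual code.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("ca-central-1")}))
	svc := ec2.New(sess)
	volumeID := "vol-0fa17ad517512d6a7" // volume ID from the log, used purely as an example

	// Keep retrying while the volume is still attached; give up after ~2 minutes.
	deadline := time.Now().Add(2 * time.Minute)
	for {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			fmt.Println("volume deleted")
			return
		}
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" && time.Now().Before(deadline) {
			log.Printf("volume still attached, sleeping 5s: %v", err)
			time.Sleep(5 * time.Second)
			continue
		}
		log.Fatalf("giving up: %v", err)
	}
}
```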






... skipping 44931 lines ...
3-fea7-4932-bdb1-6870a2880b4d kind=\"CiliumEndpoint\"\nI0523 06:30:25.438439       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-9453/sample-webhook-deployment-cbccbf6bb-hct7d\" objectUID=211a9843-fea7-4932-bdb1-6870a2880b4d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:30:26.038176       1 namespace_controller.go:185] Namespace has been deleted kubectl-8752\nE0523 06:30:26.138605       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-3041/default: secrets \"default-token-7q88n\" is forbidden: unable to create new content in namespace kubectl-3041 because it is being terminated\nI0523 06:30:26.170428       1 pvc_protection_controller.go:291] PVC provisioning-320/awsl5n9z is unused\nI0523 06:30:26.215913       1 pv_controller.go:633] volume \"pvc-f4b60888-ec8d-4e9c-a202-ea999f577415\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:30:26.225258       1 pv_controller.go:859] volume \"pvc-f4b60888-ec8d-4e9c-a202-ea999f577415\" entered phase \"Released\"\nI0523 06:30:26.231584       1 pv_controller.go:1321] isVolumeReleased[pvc-f4b60888-ec8d-4e9c-a202-ea999f577415]: volume is released\nI0523 06:30:26.349147       1 aws_util.go:62] Error deleting EBS Disk volume aws://ca-central-1a/vol-0530e6a5e20d73550: error deleting EBS volume \"vol-0530e6a5e20d73550\" since volume is currently attached to \"i-00a61631f958158b2\"\nE0523 06:30:26.349208       1 goroutinemap.go:150] Operation for \"delete-pvc-f4b60888-ec8d-4e9c-a202-ea999f577415[c5096426-70a1-457d-8aef-b2f685ea6243]\" failed. No retries permitted until 2021-05-23 06:30:26.849188253 +0000 UTC m=+406.961581937 (durationBeforeRetry 500ms). Error: \"error deleting EBS volume \\\"vol-0530e6a5e20d73550\\\" since volume is currently attached to \\\"i-00a61631f958158b2\\\"\"\nI0523 06:30:26.349404       1 event.go:291] \"Event occurred\" object=\"pvc-f4b60888-ec8d-4e9c-a202-ea999f577415\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0530e6a5e20d73550\\\" since volume is currently attached to \\\"i-00a61631f958158b2\\\"\"\nI0523 06:30:27.088272       1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-4614/test-quota\nI0523 06:30:27.097540       1 namespace_controller.go:185] Namespace has been deleted nettest-7190\nI0523 06:30:27.352715       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-f4b60888-ec8d-4e9c-a202-ea999f577415\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0530e6a5e20d73550\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:27.355411       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-f4b60888-ec8d-4e9c-a202-ea999f577415\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0530e6a5e20d73550\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nE0523 06:30:27.485832       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-7123/default: secrets \"default-token-vlf7d\" is forbidden: unable to create new content in namespace kubectl-7123 because it is being terminated\nE0523 06:30:27.612505       1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-2182/default: secrets \"default-token-gpj5n\" is forbidden: unable to create new content in namespace downward-api-2182 because it is being terminated\nI0523 06:30:27.759416       1 garbagecollector.go:404] \"Processing object\" 
object=\"pods-7017/pod-submit-status-1-7\" objectUID=0c03e331-2f04-46a1-a0df-8791d4d2360a kind=\"Pod\"\nI0523 06:30:27.766168       1 garbagecollector.go:404] \"Processing object\" object=\"pods-7017/pod-submit-status-1-7\" objectUID=530c0664-9e1f-4c35-83fe-52dd2b5b870a kind=\"CiliumEndpoint\"\nI0523 06:30:27.768179       1 garbagecollector.go:519] \"Deleting object\" object=\"pods-7017/pod-submit-status-1-7\" objectUID=530c0664-9e1f-4c35-83fe-52dd2b5b870a kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:30:27.875574       1 namespace_controller.go:185] Namespace has been deleted provisioning-5235\nE0523 06:30:28.252168       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-8646/pvc-jdql9: storageclass.storage.k8s.io \"volume-8646\" not found\nI0523 06:30:28.252461       1 event.go:291] \"Event occurred\" object=\"volume-8646/pvc-jdql9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-8646\\\" not found\"\nI0523 06:30:28.263949       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0aeedb9e5743388a2\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:28.266120       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0aeedb9e5743388a2\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:28.293433       1 pv_controller.go:859] volume \"local-8fdrs\" entered phase \"Available\"\nI0523 06:30:28.393780       1 namespace_controller.go:185] Namespace has been deleted provisioning-33\nI0523 06:30:28.611841       1 pv_controller.go:859] volume \"local-pvzdbxz\" entered phase \"Available\"\nI0523 06:30:28.643751       1 pv_controller.go:910] claim \"persistent-local-volumes-test-9726/pvc-wjrg9\" bound to volume \"local-pvzdbxz\"\nI0523 06:30:28.649268       1 pv_controller.go:859] volume \"local-pvzdbxz\" entered phase \"Bound\"\nI0523 06:30:28.649293       1 pv_controller.go:962] volume \"local-pvzdbxz\" bound to claim \"persistent-local-volumes-test-9726/pvc-wjrg9\"\nI0523 06:30:28.653730       1 pv_controller.go:803] claim \"persistent-local-volumes-test-9726/pvc-wjrg9\" entered phase \"Bound\"\nI0523 06:30:28.733524       1 namespace_controller.go:185] Namespace has been deleted provisioning-8444\nI0523 06:30:28.821362       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-9726/pvc-wjrg9 is unused\nI0523 06:30:28.827749       1 pv_controller.go:633] volume \"local-pvzdbxz\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:28.830047       1 pv_controller.go:859] volume \"local-pvzdbxz\" entered phase \"Released\"\nI0523 06:30:28.855367       1 pv_controller_base.go:500] deletion of claim \"persistent-local-volumes-test-9726/pvc-wjrg9\" was already processed\nE0523 06:30:28.858567       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-7363/pvc-sxgvl: storageclass.storage.k8s.io \"provisioning-7363\" not found\nI0523 06:30:28.858603       1 event.go:291] \"Event occurred\" object=\"provisioning-7363/pvc-sxgvl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7363\\\" not found\"\nI0523 
06:30:28.895690       1 pv_controller.go:859] volume \"local-tscgx\" entered phase \"Available\"\nI0523 06:30:29.158093       1 namespace_controller.go:185] Namespace has been deleted volume-9693\nI0523 06:30:29.271889       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-vhqr2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-09365bef81820a8cf\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:30:29.273698       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"aws-vhqr2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-09365bef81820a8cf\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nE0523 06:30:29.701937       1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-9129/default: secrets \"default-token-k8glw\" is forbidden: unable to create new content in namespace downward-api-9129 because it is being terminated\nI0523 06:30:30.084043       1 pv_controller.go:910] claim \"provisioning-896/pvc-vqmjr\" bound to volume \"local-rgjkr\"\nI0523 06:30:30.086563       1 pv_controller.go:1321] isVolumeReleased[pvc-f4b60888-ec8d-4e9c-a202-ea999f577415]: volume is released\nI0523 06:30:30.090919       1 pv_controller.go:859] volume \"local-rgjkr\" entered phase \"Bound\"\nI0523 06:30:30.090943       1 pv_controller.go:962] volume \"local-rgjkr\" bound to claim \"provisioning-896/pvc-vqmjr\"\nI0523 06:30:30.111516       1 pv_controller.go:803] claim \"provisioning-896/pvc-vqmjr\" entered phase \"Bound\"\nI0523 06:30:30.111599       1 pv_controller.go:910] claim \"volume-8646/pvc-jdql9\" bound to volume \"local-8fdrs\"\nI0523 06:30:30.122841       1 pv_controller.go:859] volume \"local-8fdrs\" entered phase \"Bound\"\nI0523 06:30:30.122864       1 pv_controller.go:962] volume \"local-8fdrs\" bound to claim \"volume-8646/pvc-jdql9\"\nI0523 06:30:30.128982       1 pv_controller.go:803] claim \"volume-8646/pvc-jdql9\" entered phase \"Bound\"\nI0523 06:30:30.129266       1 pv_controller.go:910] claim \"pv-5021/pvc-7ds8t\" bound to volume \"nfs-bc2zc\"\nI0523 06:30:30.129396       1 event.go:291] \"Event occurred\" object=\"volume-expand-7131/awsw5fgn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0523 06:30:30.138696       1 pv_controller.go:859] volume \"nfs-bc2zc\" entered phase \"Bound\"\nI0523 06:30:30.138719       1 pv_controller.go:962] volume \"nfs-bc2zc\" bound to claim \"pv-5021/pvc-7ds8t\"\nI0523 06:30:30.150691       1 pv_controller.go:803] claim \"pv-5021/pvc-7ds8t\" entered phase \"Bound\"\nI0523 06:30:30.150758       1 pv_controller.go:910] claim \"provisioning-7363/pvc-sxgvl\" bound to volume \"local-tscgx\"\nI0523 06:30:30.157151       1 pv_controller.go:859] volume \"local-tscgx\" entered phase \"Bound\"\nI0523 06:30:30.157174       1 pv_controller.go:962] volume \"local-tscgx\" bound to claim \"provisioning-7363/pvc-sxgvl\"\nI0523 06:30:30.165892       1 pv_controller.go:803] claim \"provisioning-7363/pvc-sxgvl\" entered phase \"Bound\"\nI0523 06:30:30.210105       1 aws_util.go:62] Error deleting EBS Disk volume aws://ca-central-1a/vol-0530e6a5e20d73550: error deleting EBS volume \"vol-0530e6a5e20d73550\" since volume is currently attached to \"i-00a61631f958158b2\"\nE0523 06:30:30.210155       1 goroutinemap.go:150] Operation for \"delete-pvc-f4b60888-ec8d-4e9c-a202-ea999f577415[c5096426-70a1-457d-8aef-b2f685ea6243]\" failed. 
No retries permitted until 2021-05-23 06:30:31.210137345 +0000 UTC m=+411.322531028 (durationBeforeRetry 1s). Error: \"error deleting EBS volume \\\"vol-0530e6a5e20d73550\\\" since volume is currently attached to \\\"i-00a61631f958158b2\\\"\"\nI0523 06:30:30.210430       1 event.go:291] \"Event occurred\" object=\"pvc-f4b60888-ec8d-4e9c-a202-ea999f577415\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0530e6a5e20d73550\\\" since volume is currently attached to \\\"i-00a61631f958158b2\\\"\"\nI0523 06:30:30.254318       1 namespace_controller.go:185] Namespace has been deleted container-probe-464\nI0523 06:30:30.310646       1 pvc_protection_controller.go:291] PVC volume-4475/pvc-fcfvs is unused\nI0523 06:30:30.343186       1 pv_controller.go:633] volume \"aws-vhqr2\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:30.353575       1 namespace_controller.go:185] Namespace has been deleted kubectl-7150\nI0523 06:30:30.355556       1 pv_controller.go:859] volume \"aws-vhqr2\" entered phase \"Released\"\nI0523 06:30:30.379249       1 pv_controller_base.go:500] deletion of claim \"volume-4475/pvc-fcfvs\" was already processed\nE0523 06:30:30.402594       1 tokens_controller.go:261] error synchronizing serviceaccount webhook-9453/default: secrets \"default-token-zsk2q\" is forbidden: unable to create new content in namespace webhook-9453 because it is being terminated\nE0523 06:30:30.440215       1 tokens_controller.go:261] error synchronizing serviceaccount volume-2105/default: secrets \"default-token-hfdmk\" is forbidden: unable to create new content in namespace volume-2105 because it is being terminated\nI0523 06:30:30.490256       1 namespace_controller.go:185] Namespace has been deleted provisioning-3210\nI0523 06:30:31.246097       1 namespace_controller.go:185] Namespace has been deleted kubectl-3041\nE0523 06:30:31.310133       1 namespace_controller.go:162] deletion of namespace kubectl-6488 failed: unexpected items still remain in namespace: kubectl-6488 for gvr: /v1, Resource=pods\nI0523 06:30:31.643256       1 namespace_controller.go:185] Namespace has been deleted projected-4172\nI0523 06:30:31.777467       1 pv_controller.go:859] volume \"local-pv8vzpc\" entered phase \"Available\"\nI0523 06:30:31.809519       1 pv_controller.go:910] claim \"persistent-local-volumes-test-7423/pvc-jsvk2\" bound to volume \"local-pv8vzpc\"\nI0523 06:30:31.816906       1 pv_controller.go:859] volume \"local-pv8vzpc\" entered phase \"Bound\"\nI0523 06:30:31.816934       1 pv_controller.go:962] volume \"local-pv8vzpc\" bound to claim \"persistent-local-volumes-test-7423/pvc-jsvk2\"\nI0523 06:30:31.824453       1 pv_controller.go:803] claim \"persistent-local-volumes-test-7423/pvc-jsvk2\" entered phase \"Bound\"\nI0523 06:30:32.180768       1 namespace_controller.go:185] Namespace has been deleted resourcequota-4614\nI0523 06:30:32.196287       1 pv_controller.go:859] volume \"local-pvj6b6m\" entered phase \"Available\"\nI0523 06:30:32.227726       1 pv_controller.go:910] claim \"persistent-local-volumes-test-1657/pvc-qw69n\" bound to volume \"local-pvj6b6m\"\nI0523 06:30:32.233300       1 pv_controller.go:859] volume \"local-pvj6b6m\" entered phase \"Bound\"\nI0523 06:30:32.233326       1 pv_controller.go:962] volume \"local-pvj6b6m\" bound to claim \"persistent-local-volumes-test-1657/pvc-qw69n\"\nI0523 06:30:32.237767       1 pv_controller.go:803] claim 
\"persistent-local-volumes-test-1657/pvc-qw69n\" entered phase \"Bound\"\nI0523 06:30:32.617859       1 namespace_controller.go:185] Namespace has been deleted kubectl-7123\nI0523 06:30:32.654552       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-f4b60888-ec8d-4e9c-a202-ea999f577415\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0530e6a5e20d73550\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:32.719056       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-981-6724/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0523 06:30:32.735342       1 namespace_controller.go:185] Namespace has been deleted downward-api-2182\nI0523 06:30:33.067821       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/agnhost-replica-szp8f\" objectUID=31a88bcc-efee-4e75-be76-060368f19aa1 kind=\"EndpointSlice\"\nI0523 06:30:33.073124       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/agnhost-replica-szp8f\" objectUID=31a88bcc-efee-4e75-be76-060368f19aa1 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:30:33.311510       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/agnhost-primary-wq6cz\" objectUID=f5bd758c-118a-4a51-b019-ab53a30dfd89 kind=\"EndpointSlice\"\nI0523 06:30:33.314003       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/agnhost-primary-wq6cz\" objectUID=f5bd758c-118a-4a51-b019-ab53a30dfd89 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:30:33.507843       1 namespace_controller.go:185] Namespace has been deleted provisioning-2708\nI0523 06:30:33.553725       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/frontend-q8qrd\" objectUID=c8ee300d-f2ea-4330-a7f2-3fe989cd5142 kind=\"EndpointSlice\"\nI0523 06:30:33.559226       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/frontend-q8qrd\" objectUID=c8ee300d-f2ea-4330-a7f2-3fe989cd5142 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:30:33.671621       1 aws.go:2275] Waiting for volume \"vol-0aeedb9e5743388a2\" state: actual=detaching, desired=detached\nI0523 06:30:33.810210       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/frontend-58d458fdbd\" objectUID=a1ea9b0d-04dc-47b0-983e-ad6b8cb07f9f kind=\"ReplicaSet\"\nI0523 06:30:33.810424       1 deployment_controller.go:581] Deployment kubectl-1288/frontend has been deleted\nI0523 06:30:33.812676       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/frontend-58d458fdbd\" objectUID=a1ea9b0d-04dc-47b0-983e-ad6b8cb07f9f kind=\"ReplicaSet\" propagationPolicy=Background\nI0523 06:30:33.815365       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/frontend-58d458fdbd-g58zg\" objectUID=0cd37d24-0652-4c01-9da5-cf20a8d62ddd kind=\"Pod\"\nI0523 06:30:33.815638       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/frontend-58d458fdbd-6x675\" objectUID=74ba8e54-ddb9-4bdd-bc35-b3444246de9a kind=\"Pod\"\nI0523 06:30:33.815775       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/frontend-58d458fdbd-77glk\" objectUID=dbc3f73d-6dd0-4564-9fa4-219c47233740 kind=\"Pod\"\nI0523 06:30:33.818205       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/frontend-58d458fdbd-77glk\" 
objectUID=dbc3f73d-6dd0-4564-9fa4-219c47233740 kind=\"Pod\" propagationPolicy=Background\nI0523 06:30:33.818603       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/frontend-58d458fdbd-6x675\" objectUID=74ba8e54-ddb9-4bdd-bc35-b3444246de9a kind=\"Pod\" propagationPolicy=Background\nI0523 06:30:33.818787       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/frontend-58d458fdbd-g58zg\" objectUID=0cd37d24-0652-4c01-9da5-cf20a8d62ddd kind=\"Pod\" propagationPolicy=Background\nI0523 06:30:34.058225       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/agnhost-primary-76f75c9b74\" objectUID=c1e2f26f-8187-4182-b114-e3f01fe5f4fc kind=\"ReplicaSet\"\nI0523 06:30:34.058470       1 deployment_controller.go:581] Deployment kubectl-1288/agnhost-primary has been deleted\nI0523 06:30:34.073358       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/agnhost-primary-76f75c9b74\" objectUID=c1e2f26f-8187-4182-b114-e3f01fe5f4fc kind=\"ReplicaSet\" propagationPolicy=Background\nI0523 06:30:34.131954       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/agnhost-primary-76f75c9b74-4gqnx\" objectUID=98e30e3f-da4d-4173-a574-9c8048a99a10 kind=\"Pod\"\nI0523 06:30:34.141638       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/agnhost-primary-76f75c9b74-4gqnx\" objectUID=98e30e3f-da4d-4173-a574-9c8048a99a10 kind=\"Pod\" propagationPolicy=Background\nI0523 06:30:34.314639       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/agnhost-replica-7d6489798\" objectUID=bcaea62f-0905-48c0-a918-01602996b942 kind=\"ReplicaSet\"\nI0523 06:30:34.314660       1 deployment_controller.go:581] Deployment kubectl-1288/agnhost-replica has been deleted\nI0523 06:30:34.316016       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/agnhost-replica-7d6489798\" objectUID=bcaea62f-0905-48c0-a918-01602996b942 kind=\"ReplicaSet\" propagationPolicy=Background\nI0523 06:30:34.320283       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/agnhost-replica-7d6489798-g5rjf\" objectUID=d56eabc4-0166-42bb-8f4f-41e8106c12a8 kind=\"Pod\"\nI0523 06:30:34.320537       1 garbagecollector.go:404] \"Processing object\" object=\"kubectl-1288/agnhost-replica-7d6489798-zg6sd\" objectUID=791e84c2-15de-4385-88b4-7662dd23787d kind=\"Pod\"\nI0523 06:30:34.323736       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/agnhost-replica-7d6489798-g5rjf\" objectUID=d56eabc4-0166-42bb-8f4f-41e8106c12a8 kind=\"Pod\" propagationPolicy=Background\nI0523 06:30:34.323918       1 garbagecollector.go:519] \"Deleting object\" object=\"kubectl-1288/agnhost-replica-7d6489798-zg6sd\" objectUID=791e84c2-15de-4385-88b4-7662dd23787d kind=\"Pod\" propagationPolicy=Background\nI0523 06:30:34.605227       1 aws.go:2275] Waiting for volume \"vol-09365bef81820a8cf\" state: actual=detaching, desired=detached\nI0523 06:30:34.767290       1 namespace_controller.go:185] Namespace has been deleted downward-api-9129\nI0523 06:30:35.221926       1 namespace_controller.go:185] Namespace has been deleted kubectl-3179\nI0523 06:30:35.641833       1 namespace_controller.go:185] Namespace has been deleted webhook-9453-markers\nI0523 06:30:35.704728       1 namespace_controller.go:185] Namespace has been deleted volume-2105\nI0523 06:30:35.732039       1 namespace_controller.go:185] Namespace has been deleted provisioning-5235-325\nI0523 06:30:35.757335       1 aws.go:2275] Waiting for volume 
\"vol-0aeedb9e5743388a2\" state: actual=detaching, desired=detached\nI0523 06:30:35.813946       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0b180b48da07df0c3\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:30:35.817475       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0b180b48da07df0c3\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:30:35.907352       1 event.go:291] \"Event occurred\" object=\"provisioning-6143-3657/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0523 06:30:36.005767       1 event.go:291] \"Event occurred\" object=\"provisioning-6143-3657/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0523 06:30:36.042918       1 pvc_protection_controller.go:291] PVC volume-6414/awslqzmt is unused\nI0523 06:30:36.048005       1 pv_controller.go:633] volume \"pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:30:36.050779       1 pv_controller.go:859] volume \"pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0\" entered phase \"Released\"\nI0523 06:30:36.051939       1 pv_controller.go:1321] isVolumeReleased[pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0]: volume is released\nI0523 06:30:36.089351       1 event.go:291] \"Event occurred\" object=\"provisioning-6143-3657/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0523 06:30:36.165002       1 event.go:291] \"Event occurred\" object=\"provisioning-6143-3657/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0523 06:30:36.197809       1 aws_util.go:62] Error deleting EBS Disk volume aws://ca-central-1a/vol-0b180b48da07df0c3: error deleting EBS volume \"vol-0b180b48da07df0c3\" since volume is currently attached to \"i-0ec4cc948b7b1f9be\"\nE0523 06:30:36.197879       1 goroutinemap.go:150] Operation for \"delete-pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0[14eabd62-3a72-4670-b8c6-ad1dcaec9bff]\" failed. No retries permitted until 2021-05-23 06:30:36.697856571 +0000 UTC m=+416.810250257 (durationBeforeRetry 500ms). 
Error: \"error deleting EBS volume \\\"vol-0b180b48da07df0c3\\\" since volume is currently attached to \\\"i-0ec4cc948b7b1f9be\\\"\"\nI0523 06:30:36.197908       1 event.go:291] \"Event occurred\" object=\"pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0b180b48da07df0c3\\\" since volume is currently attached to \\\"i-0ec4cc948b7b1f9be\\\"\"\nI0523 06:30:36.252046       1 garbagecollector.go:199] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-557-crds], removed: []\nI0523 06:30:36.274466       1 event.go:291] \"Event occurred\" object=\"provisioning-6143-3657/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0523 06:30:36.425822       1 event.go:291] \"Event occurred\" object=\"provisioning-6143/csi-hostpathh67lk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-6143\\\" or manually created by system administrator\"\nI0523 06:30:36.479431       1 resource_quota_controller.go:434] syncing resource quota controller with updated resources from discovery: added: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-557-crds], removed: []\nI0523 06:30:36.480269       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-557-crds.crd-publish-openapi-test-common-group.example.com\nI0523 06:30:36.480351       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0523 06:30:36.480556       1 reflector.go:207] Starting reflector *v1.PartialObjectMetadata (16h35m58.358995344s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90\nI0523 06:30:36.580513       1 shared_informer.go:247] Caches are synced for resource quota \nI0523 06:30:36.580534       1 resource_quota_controller.go:453] synced quota controller\nI0523 06:30:36.667952       1 aws.go:2501] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-23 06:29:52 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcq\",\n  InstanceId: \"i-0ec4cc948b7b1f9be\",\n  State: \"detaching\",\n  VolumeId: \"vol-09365bef81820a8cf\"\n}\nI0523 06:30:36.667998       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"aws-vhqr2\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-09365bef81820a8cf\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:30:36.703446       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0523 06:30:36.703507       1 shared_informer.go:247] Caches are synced for garbage collector \nI0523 06:30:36.703514       1 garbagecollector.go:240] synced garbage collector\nI0523 06:30:37.254073       1 event.go:291] \"Event occurred\" object=\"statefulset-7578/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0523 06:30:38.077908       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-981/pvc-hstsb\" 
kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-981\\\" or manually created by system administrator\"\nI0523 06:30:38.150020       1 pv_controller.go:859] volume \"pvc-ce753916-8267-420c-a760-dcb5dff9cb7c\" entered phase \"Bound\"\nI0523 06:30:38.150050       1 pv_controller.go:962] volume \"pvc-ce753916-8267-420c-a760-dcb5dff9cb7c\" bound to claim \"csi-mock-volumes-981/pvc-hstsb\"\nI0523 06:30:38.182169       1 pv_controller.go:803] claim \"csi-mock-volumes-981/pvc-hstsb\" entered phase \"Bound\"\nE0523 06:30:38.863722       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:30:39.415286       1 namespace_controller.go:185] Namespace has been deleted projected-1210\nE0523 06:30:39.571519       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-1288/default: secrets \"default-token-xbg5z\" is forbidden: unable to create new content in namespace kubectl-1288 because it is being terminated\nI0523 06:30:39.753122       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-1657/pod-f719e1e4-f066-426e-b6b1-aa8a93f2698b uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-qw69n pvc- persistent-local-volumes-test-1657 /api/v1/namespaces/persistent-local-volumes-test-1657/persistentvolumeclaims/pvc-qw69n 6a73874e-ec69-4a10-8fd5-eb6fdd19678f 7125 0 2021-05-23 06:30:32 +0000 UTC 2021-05-23 06:30:39 +0000 UTC 0xc0009e9488 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:32 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:32 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvj6b6m,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-1657,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:30:39.753207       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-1657/pvc-qw69n because it is still being used\nI0523 06:30:39.808882       1 aws.go:2501] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-23 06:30:01 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdba\",\n  InstanceId: \"i-00a61631f958158b2\",\n  State: \"detaching\",\n  VolumeId: \"vol-0aeedb9e5743388a2\"\n}\nI0523 06:30:39.808931       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" (UniqueName: 
\"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0aeedb9e5743388a2\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:39.849696       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0aeedb9e5743388a2\") from node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:39.867524       1 pvc_protection_controller.go:291] PVC provisioning-7363/pvc-sxgvl is unused\nI0523 06:30:39.874313       1 pv_controller.go:633] volume \"local-tscgx\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:39.878050       1 pv_controller.go:859] volume \"local-tscgx\" entered phase \"Released\"\nI0523 06:30:39.891068       1 aws.go:1998] Assigned mount device bb -> volume vol-0aeedb9e5743388a2\nI0523 06:30:39.905102       1 pv_controller_base.go:500] deletion of claim \"provisioning-7363/pvc-sxgvl\" was already processed\nI0523 06:30:40.001145       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-9726\nE0523 06:30:40.158985       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:30:40.191913       1 aws.go:2411] AttachVolume volume=\"vol-0aeedb9e5743388a2\" instance=\"i-00a61631f958158b2\" request returned {\n  AttachTime: 2021-05-23 06:30:40.182 +0000 UTC,\n  Device: \"/dev/xvdbb\",\n  InstanceId: \"i-00a61631f958158b2\",\n  State: \"attaching\",\n  VolumeId: \"vol-0aeedb9e5743388a2\"\n}\nI0523 06:30:40.677095       1 pv_controller.go:859] volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" entered phase \"Bound\"\nI0523 06:30:40.677145       1 pv_controller.go:962] volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" bound to claim \"provisioning-6143/csi-hostpathh67lk\"\nI0523 06:30:40.682631       1 pv_controller.go:803] claim \"provisioning-6143/csi-hostpathh67lk\" entered phase \"Bound\"\nI0523 06:30:41.166307       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0b180b48da07df0c3\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:30:41.443090       1 pvc_protection_controller.go:291] PVC provisioning-5628/nfs4vnmz is unused\nI0523 06:30:41.447897       1 pv_controller.go:633] volume \"pvc-6d3f4756-f637-4104-a4df-9b61bf073c14\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:30:41.451180       1 pv_controller.go:859] volume \"pvc-6d3f4756-f637-4104-a4df-9b61bf073c14\" entered phase \"Released\"\nI0523 06:30:41.453128       1 pv_controller.go:1321] isVolumeReleased[pvc-6d3f4756-f637-4104-a4df-9b61bf073c14]: volume is released\nI0523 06:30:41.468307       1 pv_controller_base.go:500] deletion of claim \"provisioning-5628/nfs4vnmz\" was already processed\nI0523 06:30:41.631280       1 event.go:291] \"Event occurred\" object=\"provisioning-8562/awszg62z\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0523 06:30:42.244581       1 pvc_protection_controller.go:291] PVC pv-5021/pvc-7ds8t is unused\nI0523 06:30:42.249498       1 pv_controller.go:633] volume \"nfs-bc2zc\" is released and reclaim policy \"Retain\" will be executed\nI0523 
06:30:42.252191       1 pv_controller.go:859] volume \"nfs-bc2zc\" entered phase \"Released\"\nI0523 06:30:42.303943       1 aws.go:2021] Releasing in-process attachment entry: bb -> volume vol-0aeedb9e5743388a2\nI0523 06:30:42.303986       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0aeedb9e5743388a2\") from node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:42.304117       1 event.go:291] \"Event occurred\" object=\"volume-2954/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-c34ad428-2544-4cd7-abef-217162aaecde\\\" \"\nI0523 06:30:42.389977       1 pv_controller_base.go:500] deletion of claim \"pv-5021/pvc-7ds8t\" was already processed\nE0523 06:30:42.725727       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:30:42.760715       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-6143^64da217a-bb90-11eb-b06d-82bf8f3b607e\") from node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:30:42.771520       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-6143^64da217a-bb90-11eb-b06d-82bf8f3b607e\") from node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:30:42.771702       1 event.go:291] \"Event occurred\" object=\"provisioning-6143/pod-subpath-test-dynamicpv-nzst\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\\\" \"\nE0523 06:30:42.855538       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-4828/pvc-k645c: storageclass.storage.k8s.io \"provisioning-4828\" not found\nI0523 06:30:42.855788       1 event.go:291] \"Event occurred\" object=\"provisioning-4828/pvc-k645c\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-4828\\\" not found\"\nI0523 06:30:42.896669       1 pv_controller.go:859] volume \"local-shrq9\" entered phase \"Available\"\nI0523 06:30:43.671384       1 pvc_protection_controller.go:291] PVC provisioning-896/pvc-vqmjr is unused\nI0523 06:30:43.677151       1 pv_controller.go:633] volume \"local-rgjkr\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:43.679981       1 pv_controller.go:859] volume \"local-rgjkr\" entered phase \"Released\"\nI0523 06:30:43.709761       1 pv_controller_base.go:500] deletion of claim \"provisioning-896/pvc-vqmjr\" was already processed\nI0523 06:30:44.062388       1 event.go:291] \"Event occurred\" object=\"provisioning-4582/pvc-qfmx5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-4582\\\" or manually created by system administrator\"\nI0523 06:30:44.062574       1 event.go:291] \"Event occurred\" 
object=\"provisioning-4582/pvc-qfmx5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-4582\\\" or manually created by system administrator\"\nI0523 06:30:45.084269       1 pv_controller.go:910] claim \"provisioning-4828/pvc-k645c\" bound to volume \"local-shrq9\"\nI0523 06:30:45.084446       1 event.go:291] \"Event occurred\" object=\"provisioning-4582/pvc-qfmx5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-4582\\\" or manually created by system administrator\"\nI0523 06:30:45.084465       1 event.go:291] \"Event occurred\" object=\"volume-expand-7131/awsw5fgn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0523 06:30:45.087516       1 pv_controller.go:1321] isVolumeReleased[pvc-f4b60888-ec8d-4e9c-a202-ea999f577415]: volume is released\nI0523 06:30:45.090715       1 pv_controller.go:1321] isVolumeReleased[pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0]: volume is released\nI0523 06:30:45.094737       1 pv_controller.go:859] volume \"local-shrq9\" entered phase \"Bound\"\nI0523 06:30:45.094758       1 pv_controller.go:962] volume \"local-shrq9\" bound to claim \"provisioning-4828/pvc-k645c\"\nI0523 06:30:45.099923       1 pv_controller.go:803] claim \"provisioning-4828/pvc-k645c\" entered phase \"Bound\"\nI0523 06:30:45.231993       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ca-central-1a/vol-0530e6a5e20d73550\nI0523 06:30:45.232029       1 pv_controller.go:1416] volume \"pvc-f4b60888-ec8d-4e9c-a202-ea999f577415\" deleted\nI0523 06:30:45.241375       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ca-central-1a/vol-0b180b48da07df0c3\nI0523 06:30:45.241395       1 pv_controller.go:1416] volume \"pvc-bf3114a7-6d2d-4fa1-930c-7662e12abda0\" deleted\nI0523 06:30:45.243740       1 pv_controller_base.go:500] deletion of claim \"provisioning-320/awsl5n9z\" was already processed\nI0523 06:30:45.252898       1 pv_controller_base.go:500] deletion of claim \"volume-6414/awslqzmt\" was already processed\nE0523 06:30:45.289554       1 tokens_controller.go:261] error synchronizing serviceaccount persistent-local-volumes-test-1657/default: secrets \"default-token-bmplc\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-1657 because it is being terminated\nI0523 06:30:45.355181       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-1657/pvc-qw69n is unused\nI0523 06:30:45.361765       1 pv_controller.go:633] volume \"local-pvj6b6m\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:45.365367       1 pv_controller.go:859] volume \"local-pvj6b6m\" entered phase \"Released\"\nI0523 06:30:45.389249       1 pv_controller_base.go:500] deletion of claim \"persistent-local-volumes-test-1657/pvc-qw69n\" was already processed\nI0523 06:30:45.514171       1 pv_controller.go:859] volume \"pvc-76cfdc98-e0b3-4c5b-b780-8d6ab68720fc\" entered phase \"Bound\"\nI0523 06:30:45.514200       1 pv_controller.go:962] volume \"pvc-76cfdc98-e0b3-4c5b-b780-8d6ab68720fc\" bound to claim \"provisioning-4582/pvc-qfmx5\"\nI0523 06:30:45.520417       1 pv_controller.go:803] claim 
\"provisioning-4582/pvc-qfmx5\" entered phase \"Bound\"\nI0523 06:30:46.029539       1 event.go:291] \"Event occurred\" object=\"provisioning-5480-5517/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nE0523 06:30:46.043293       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:30:46.145039       1 event.go:291] \"Event occurred\" object=\"provisioning-5480-5517/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0523 06:30:46.218764       1 event.go:291] \"Event occurred\" object=\"provisioning-5480-5517/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0523 06:30:46.293244       1 event.go:291] \"Event occurred\" object=\"provisioning-5480-5517/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0523 06:30:46.390048       1 event.go:291] \"Event occurred\" object=\"provisioning-5480-5517/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nE0523 06:30:46.390156       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-7363/default: secrets \"default-token-z7vg2\" is forbidden: unable to create new content in namespace provisioning-7363 because it is being terminated\nI0523 06:30:46.492932       1 event.go:291] \"Event occurred\" object=\"provisioning-5480/csi-hostpathr7w77\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-5480\\\" or manually created by system administrator\"\nI0523 06:30:46.492959       1 event.go:291] \"Event occurred\" object=\"provisioning-5480/csi-hostpathr7w77\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-5480\\\" or manually created by system administrator\"\nE0523 06:30:46.648887       1 tokens_controller.go:261] error synchronizing serviceaccount crd-publish-openapi-6138/default: secrets \"default-token-sjnr2\" is forbidden: unable to create new content in namespace crd-publish-openapi-6138 because it is being terminated\nI0523 06:30:47.028398       1 aws_util.go:113] Successfully created EBS Disk volume aws://ca-central-1a/vol-03ca0c16687584cfe\nI0523 06:30:47.069935       1 pv_controller.go:1647] volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" provisioned for claim \"provisioning-8562/awszg62z\"\nI0523 06:30:47.070169       1 event.go:291] \"Event occurred\" object=\"provisioning-8562/awszg62z\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230 using kubernetes.io/aws-ebs\"\nI0523 06:30:47.073360       1 pv_controller.go:859] volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" entered phase \"Bound\"\nI0523 06:30:47.073384       1 pv_controller.go:962] volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" bound to claim \"provisioning-8562/awszg62z\"\nI0523 06:30:47.079022       1 pv_controller.go:803] claim \"provisioning-8562/awszg62z\" entered phase \"Bound\"\nI0523 06:30:47.778033       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") from node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:47.825365       1 aws.go:1998] Assigned mount device cl -> volume vol-03ca0c16687584cfe\nE0523 06:30:48.004914       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-9388/pvc-7t8bw: storageclass.storage.k8s.io \"volume-9388\" not found\nI0523 06:30:48.005211       1 event.go:291] \"Event occurred\" object=\"volume-9388/pvc-7t8bw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-9388\\\" not found\"\nI0523 06:30:48.046028       1 pv_controller.go:859] volume \"local-qz7nv\" entered phase \"Available\"\nI0523 06:30:48.103777       1 aws.go:2411] AttachVolume volume=\"vol-03ca0c16687584cfe\" instance=\"i-00a61631f958158b2\" request returned {\n  AttachTime: 2021-05-23 06:30:48.092 +0000 UTC,\n  Device: \"/dev/xvdcl\",\n  InstanceId: \"i-00a61631f958158b2\",\n  State: \"attaching\",\n  VolumeId: \"vol-03ca0c16687584cfe\"\n}\nI0523 06:30:48.473036       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-981/pvc-hstsb is unused\nI0523 06:30:48.476432       1 pv_controller.go:633] volume \"pvc-ce753916-8267-420c-a760-dcb5dff9cb7c\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:30:48.479718       1 pv_controller.go:859] volume \"pvc-ce753916-8267-420c-a760-dcb5dff9cb7c\" entered phase \"Released\"\nI0523 06:30:48.480822       1 pv_controller.go:1321] isVolumeReleased[pvc-ce753916-8267-420c-a760-dcb5dff9cb7c]: volume is released\nI0523 06:30:48.538960       1 pv_controller_base.go:500] deletion of claim \"csi-mock-volumes-981/pvc-hstsb\" was already processed\nI0523 06:30:49.327348       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-7423/pod-b6e0cc5e-9a36-490f-8168-6af6539c7b1a uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-jsvk2 pvc- persistent-local-volumes-test-7423 /api/v1/namespaces/persistent-local-volumes-test-7423/persistentvolumeclaims/pvc-jsvk2 25696c03-7864-4811-856b-01494b01c5e4 7647 0 2021-05-23 06:30:31 +0000 UTC 2021-05-23 06:30:49 +0000 UTC 0xc0021d1998 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vzpc,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-7423,VolumeMode:*Block,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:30:49.327396       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-7423/pvc-jsvk2 because it is still being used\nE0523 06:30:49.462055       1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-134/default: secrets \"default-token-2cvd6\" is forbidden: unable to create new content in namespace downward-api-134 because it is being terminated\nE0523 06:30:49.875825       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-896/default: secrets \"default-token-mkvbj\" is forbidden: unable to create new content in namespace provisioning-896 because it is being terminated\nI0523 06:30:50.199380       1 aws.go:2021] Releasing in-process attachment entry: cl -> volume vol-03ca0c16687584cfe\nI0523 06:30:50.199429       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") from node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:30:50.199673       1 event.go:291] \"Event occurred\" object=\"provisioning-8562/pod-subpath-test-dynamicpv-h8vj\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\\\" \"\nI0523 06:30:50.400664       1 pv_controller.go:859] volume \"hostpath-6d5rj\" entered phase \"Available\"\nI0523 06:30:50.503633       1 pv_controller.go:910] claim \"pv-protection-5266/pvc-gmjr5\" bound to volume \"hostpath-6d5rj\"\nI0523 06:30:50.509796       1 pv_controller.go:859] volume \"hostpath-6d5rj\" entered phase \"Bound\"\nI0523 06:30:50.509823       1 pv_controller.go:962] volume \"hostpath-6d5rj\" bound to claim \"pv-protection-5266/pvc-gmjr5\"\nI0523 06:30:50.516745       1 pv_controller.go:803] claim \"pv-protection-5266/pvc-gmjr5\" entered phase \"Bound\"\nI0523 06:30:50.643059       1 pvc_protection_controller.go:291] PVC pv-protection-5266/pvc-gmjr5 is unused\nI0523 06:30:50.651337       1 pv_controller.go:633] volume \"hostpath-6d5rj\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:50.654727       1 pv_controller.go:859] volume \"hostpath-6d5rj\" entered phase \"Released\"\nI0523 06:30:50.657766       1 pv_controller_base.go:500] deletion of claim \"pv-protection-5266/pvc-gmjr5\" was already processed\nI0523 06:30:51.058139       1 pvc_protection_controller.go:291] PVC volume-7860/csi-hostpathm674q is unused\nI0523 06:30:51.061703       1 pv_controller.go:633] volume \"pvc-79f7c3fe-8d6d-49df-bb91-063c57b659ad\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:30:51.065019       1 
pv_controller.go:859] volume \"pvc-79f7c3fe-8d6d-49df-bb91-063c57b659ad\" entered phase \"Released\"\nI0523 06:30:51.066726       1 pv_controller.go:1321] isVolumeReleased[pvc-79f7c3fe-8d6d-49df-bb91-063c57b659ad]: volume is released\nI0523 06:30:51.095768       1 pv_controller_base.go:500] deletion of claim \"volume-7860/csi-hostpathm674q\" was already processed\nI0523 06:30:51.159118       1 pv_controller.go:859] volume \"local-pv9g7kp\" entered phase \"Available\"\nI0523 06:30:51.190072       1 pv_controller.go:910] claim \"persistent-local-volumes-test-433/pvc-blrl4\" bound to volume \"local-pv9g7kp\"\nI0523 06:30:51.196537       1 pv_controller.go:859] volume \"local-pv9g7kp\" entered phase \"Bound\"\nI0523 06:30:51.196557       1 pv_controller.go:962] volume \"local-pv9g7kp\" bound to claim \"persistent-local-volumes-test-433/pvc-blrl4\"\nI0523 06:30:51.201532       1 pv_controller.go:803] claim \"persistent-local-volumes-test-433/pvc-blrl4\" entered phase \"Bound\"\nE0523 06:30:51.314865       1 tokens_controller.go:261] error synchronizing serviceaccount volume-6414/default: secrets \"default-token-9fj5g\" is forbidden: unable to create new content in namespace volume-6414 because it is being terminated\nI0523 06:30:51.435142       1 garbagecollector.go:404] \"Processing object\" object=\"statefulset-7578/ss2-0\" objectUID=89b4f4d5-f3c9-4069-9a30-23b3cdab2b90 kind=\"CiliumEndpoint\"\nI0523 06:30:51.445684       1 event.go:291] \"Event occurred\" object=\"statefulset-7578/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0523 06:30:51.487639       1 garbagecollector.go:404] \"Processing object\" object=\"statefulset-7578/ss2-1\" objectUID=876ab328-eb21-4393-8a01-2958d8614854 kind=\"CiliumEndpoint\"\nI0523 06:30:51.511983       1 pvc_protection_controller.go:291] PVC provisioning-4828/pvc-k645c is unused\nI0523 06:30:51.524137       1 pv_controller.go:633] volume \"local-shrq9\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:51.529402       1 pv_controller.go:859] volume \"local-shrq9\" entered phase \"Released\"\nI0523 06:30:51.553559       1 garbagecollector.go:404] \"Processing object\" object=\"statefulset-7578/ss2-2\" objectUID=4d20f570-388e-4937-bc23-d563cf7c5427 kind=\"CiliumEndpoint\"\nW0523 06:30:51.554939       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"statefulset-7578/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nI0523 06:30:51.562916       1 endpoints_controller.go:336] \"Error syncing endpoints, retrying\" service=\"statefulset-7578/test\" err=\"Operation cannot be fulfilled on endpoints \\\"test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:30:51.563138       1 event.go:291] \"Event occurred\" object=\"statefulset-7578/test\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint statefulset-7578/test: Operation cannot be fulfilled on endpoints \\\"test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0523 06:30:51.592728       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-320/default: secrets \"default-token-pkqsp\" is forbidden: unable to create new content in namespace provisioning-320 because it is being terminated\nI0523 06:30:51.592925       1 pv_controller_base.go:500] deletion of claim \"provisioning-4828/pvc-k645c\" was already processed\nI0523 06:30:51.660541       1 namespace_controller.go:185] Namespace has been deleted volume-4475\nI0523 06:30:51.697803       1 namespace_controller.go:185] Namespace has been deleted provisioning-7363\nI0523 06:30:51.807100       1 namespace_controller.go:185] Namespace has been deleted kubectl-4890\nI0523 06:30:51.828541       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-6138\nE0523 06:30:52.037643       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-9607/default: secrets \"default-token-tzlfd\" is forbidden: unable to create new content in namespace kubectl-9607 because it is being terminated\nI0523 06:30:52.165997       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7796-3761/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nE0523 06:30:52.245327       1 namespace_controller.go:162] deletion of namespace kubectl-6488 failed: unexpected items still remain in namespace: kubectl-6488 for gvr: /v1, Resource=pods\nE0523 06:30:52.820927       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-5628/default: secrets \"default-token-c6hmv\" is forbidden: unable to create new content in namespace provisioning-5628 because it is being terminated\nI0523 06:30:53.018719       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-79f7c3fe-8d6d-49df-bb91-063c57b659ad\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-7860^4f8a5198-bb90-11eb-bbf4-82694c1169ca\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:30:53.021259       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-7423/pod-b6e0cc5e-9a36-490f-8168-6af6539c7b1a uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-jsvk2 pvc- persistent-local-volumes-test-7423 /api/v1/namespaces/persistent-local-volumes-test-7423/persistentvolumeclaims/pvc-jsvk2 25696c03-7864-4811-856b-01494b01c5e4 7647 0 2021-05-23 06:30:31 +0000 UTC 2021-05-23 06:30:49 +0000 UTC 0xc0021d1998 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vzpc,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-7423,VolumeMode:*Block,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:30:53.021372       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-7423/pvc-jsvk2 because it is still being used\nI0523 06:30:53.022873       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-79f7c3fe-8d6d-49df-bb91-063c57b659ad\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-7860^4f8a5198-bb90-11eb-bbf4-82694c1169ca\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:30:53.027121       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-79f7c3fe-8d6d-49df-bb91-063c57b659ad\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-7860^4f8a5198-bb90-11eb-bbf4-82694c1169ca\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nE0523 06:30:53.177105       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:30:53.319338       1 pv_controller.go:859] volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" entered phase \"Bound\"\nI0523 06:30:53.319371       1 pv_controller.go:962] volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" bound to claim \"provisioning-5480/csi-hostpathr7w77\"\nI0523 06:30:53.325926       1 pv_controller.go:803] claim \"provisioning-5480/csi-hostpathr7w77\" entered phase \"Bound\"\nI0523 06:30:53.559954       1 pv_controller.go:859] volume \"local-pvpt2pq\" entered phase \"Available\"\nI0523 06:30:53.590773       1 pv_controller.go:910] claim \"persistent-local-volumes-test-8699/pvc-mgc4m\" bound to volume \"local-pvpt2pq\"\nI0523 06:30:53.597027       1 pv_controller.go:859] volume \"local-pvpt2pq\" entered phase \"Bound\"\nI0523 06:30:53.597053       1 pv_controller.go:962] volume \"local-pvpt2pq\" bound to claim \"persistent-local-volumes-test-8699/pvc-mgc4m\"\nI0523 06:30:53.603876       1 pv_controller.go:803] claim \"persistent-local-volumes-test-8699/pvc-mgc4m\" entered phase \"Bound\"\nI0523 06:30:54.824591       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5480^6c623faa-bb90-11eb-a1ce-6eeb2c65d25c\") from node \"ip-172-20-52-132.ca-central-1.compute.internal\" \nI0523 06:30:54.843611       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" 
(UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5480^6c623faa-bb90-11eb-a1ce-6eeb2c65d25c\") from node \"ip-172-20-52-132.ca-central-1.compute.internal\" \nI0523 06:30:54.843849       1 event.go:291] \"Event occurred\" object=\"provisioning-5480/pod-subpath-test-dynamicpv-vpdg\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\\\" \"\nI0523 06:30:54.945471       1 namespace_controller.go:185] Namespace has been deleted provisioning-896\nI0523 06:30:55.101854       1 namespace_controller.go:185] Namespace has been deleted kubectl-1288\nE0523 06:30:55.211892       1 tokens_controller.go:261] error synchronizing serviceaccount pods-5294/default: secrets \"default-token-l84gh\" is forbidden: unable to create new content in namespace pods-5294 because it is being terminated\nE0523 06:30:55.396109       1 tokens_controller.go:261] error synchronizing serviceaccount persistent-local-volumes-test-7423/default: secrets \"default-token-f7npf\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-7423 because it is being terminated\nI0523 06:30:55.478788       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-7423/pod-b6e0cc5e-9a36-490f-8168-6af6539c7b1a uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-jsvk2 pvc- persistent-local-volumes-test-7423 /api/v1/namespaces/persistent-local-volumes-test-7423/persistentvolumeclaims/pvc-jsvk2 25696c03-7864-4811-856b-01494b01c5e4 7647 0 2021-05-23 06:30:31 +0000 UTC 2021-05-23 06:30:49 +0000 UTC 0xc0021d1998 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vzpc,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-7423,VolumeMode:*Block,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:30:55.478853       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-7423/pvc-jsvk2 because it is still being used\nI0523 06:30:55.482406       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-7423/pod-b6e0cc5e-9a36-490f-8168-6af6539c7b1a uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-jsvk2 pvc- persistent-local-volumes-test-7423 /api/v1/namespaces/persistent-local-volumes-test-7423/persistentvolumeclaims/pvc-jsvk2 25696c03-7864-4811-856b-01494b01c5e4 7647 0 2021-05-23 06:30:31 +0000 UTC 2021-05-23 06:30:49 +0000 UTC 0xc0021d1998 map[] map[pv.kubernetes.io/bind-completed:yes 
pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vzpc,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-7423,VolumeMode:*Block,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:30:55.482454       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-7423/pvc-jsvk2 because it is still being used\nI0523 06:30:55.485383       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-7423/pod-b6e0cc5e-9a36-490f-8168-6af6539c7b1a uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-jsvk2 pvc- persistent-local-volumes-test-7423 /api/v1/namespaces/persistent-local-volumes-test-7423/persistentvolumeclaims/pvc-jsvk2 25696c03-7864-4811-856b-01494b01c5e4 7647 0 2021-05-23 06:30:31 +0000 UTC 2021-05-23 06:30:49 +0000 UTC 0xc0021d1998 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv8vzpc,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-7423,VolumeMode:*Block,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:30:55.485435       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-7423/pvc-jsvk2 because it is still being used\nI0523 06:30:55.489788       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-7423/pvc-jsvk2 is unused\nI0523 06:30:55.500847       1 pv_controller.go:633] volume \"local-pv8vzpc\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:55.506449       1 pv_controller.go:859] volume \"local-pv8vzpc\" entered phase 
\"Released\"\nI0523 06:30:55.511323       1 pv_controller_base.go:500] deletion of claim \"persistent-local-volumes-test-7423/pvc-jsvk2\" was already processed\nI0523 06:30:55.531836       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-1657\nE0523 06:30:55.603050       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-7994/pvc-kwvdl: storageclass.storage.k8s.io \"volume-7994\" not found\nI0523 06:30:55.603346       1 event.go:291] \"Event occurred\" object=\"volume-7994/pvc-kwvdl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-7994\\\" not found\"\nI0523 06:30:55.637262       1 event.go:291] \"Event occurred\" object=\"volume-expand-7131/awsw5fgn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0523 06:30:55.640797       1 pvc_protection_controller.go:291] PVC volume-expand-7131/awsw5fgn is unused\nI0523 06:30:55.658014       1 pv_controller.go:859] volume \"local-kvvcf\" entered phase \"Available\"\nE0523 06:30:55.856905       1 tokens_controller.go:261] error synchronizing serviceaccount pv-protection-5266/default: secrets \"default-token-llp4s\" is forbidden: unable to create new content in namespace pv-protection-5266 because it is being terminated\nI0523 06:30:55.942419       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d09414535beab08c\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:30:55.945416       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d09414535beab08c\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nE0523 06:30:56.235101       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-8206/pvc-vjn69: storageclass.storage.k8s.io \"volume-8206\" not found\nI0523 06:30:56.235429       1 event.go:291] \"Event occurred\" object=\"volume-8206/pvc-vjn69\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-8206\\\" not found\"\nI0523 06:30:56.274535       1 pv_controller.go:859] volume \"local-84svb\" entered phase \"Available\"\nI0523 06:30:56.518548       1 namespace_controller.go:185] Namespace has been deleted volume-6414\nI0523 06:30:56.659791       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-981-6724/csi-mockplugin-6649b5f444\" objectUID=072f5858-86c8-4227-9af7-17ccd23aebaf kind=\"ControllerRevision\"\nI0523 06:30:56.660047       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-981-6724/csi-mockplugin\nI0523 06:30:56.660104       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-981-6724/csi-mockplugin-0\" objectUID=0d9acc2d-c6c0-4b4c-acea-b7c9dfa86fe5 kind=\"Pod\"\nI0523 06:30:56.662934       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-981-6724/csi-mockplugin-0\" objectUID=0d9acc2d-c6c0-4b4c-acea-b7c9dfa86fe5 kind=\"Pod\" propagationPolicy=Background\nI0523 06:30:56.663185       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-981-6724/csi-mockplugin-6649b5f444\" objectUID=072f5858-86c8-4227-9af7-17ccd23aebaf 
kind=\"ControllerRevision\" propagationPolicy=Background\nE0523 06:30:56.903249       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-8165/pvc-jrgtf: storageclass.storage.k8s.io \"provisioning-8165\" not found\nI0523 06:30:56.903516       1 event.go:291] \"Event occurred\" object=\"provisioning-8165/pvc-jrgtf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8165\\\" not found\"\nI0523 06:30:56.941138       1 pv_controller.go:859] volume \"local-kfjnf\" entered phase \"Available\"\nI0523 06:30:57.079882       1 namespace_controller.go:185] Namespace has been deleted provisioning-320\nI0523 06:30:57.250436       1 namespace_controller.go:185] Namespace has been deleted statefulset-2598\nI0523 06:30:57.257014       1 namespace_controller.go:185] Namespace has been deleted webhook-9453\nI0523 06:30:57.273531       1 namespace_controller.go:185] Namespace has been deleted kubectl-9607\nE0523 06:30:57.421898       1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-6362/default: secrets \"default-token-97vbv\" is forbidden: unable to create new content in namespace downward-api-6362 because it is being terminated\nI0523 06:30:57.487069       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7796/pvc-sl5k2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0523 06:30:57.526988       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-7796/pvc-sl5k2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-7796\\\" or manually created by system administrator\"\nI0523 06:30:57.551672       1 pv_controller.go:859] volume \"pvc-39d90e63-78db-4a16-a17b-265e6f8aab14\" entered phase \"Bound\"\nI0523 06:30:57.551699       1 pv_controller.go:962] volume \"pvc-39d90e63-78db-4a16-a17b-265e6f8aab14\" bound to claim \"csi-mock-volumes-7796/pvc-sl5k2\"\nI0523 06:30:57.557676       1 pv_controller.go:803] claim \"csi-mock-volumes-7796/pvc-sl5k2\" entered phase \"Bound\"\nE0523 06:30:57.601347       1 tokens_controller.go:261] error synchronizing serviceaccount pv-5021/default: secrets \"default-token-k7zl2\" is forbidden: unable to create new content in namespace pv-5021 because it is being terminated\nI0523 06:30:57.909075       1 namespace_controller.go:185] Namespace has been deleted provisioning-5628\nI0523 06:30:59.224115       1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-5729/test-quota\nI0523 06:30:59.852922       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-981\nI0523 06:30:59.963327       1 pvc_protection_controller.go:291] PVC volume-8646/pvc-jdql9 is unused\nI0523 06:30:59.970349       1 pv_controller.go:633] volume \"local-8fdrs\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:30:59.973467       1 pv_controller.go:859] volume \"local-8fdrs\" entered phase \"Released\"\nI0523 06:31:00.001359       1 pv_controller_base.go:500] deletion of claim \"volume-8646/pvc-jdql9\" was already processed\nI0523 06:31:00.039267       1 event.go:291] \"Event occurred\" object=\"statefulset-7578/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" 
message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI0523 06:31:00.083992       1 pv_controller.go:910] claim \"volume-9388/pvc-7t8bw\" bound to volume \"local-qz7nv\"\nI0523 06:31:00.090357       1 pv_controller.go:859] volume \"local-qz7nv\" entered phase \"Bound\"\nI0523 06:31:00.090384       1 pv_controller.go:962] volume \"local-qz7nv\" bound to claim \"volume-9388/pvc-7t8bw\"\nI0523 06:31:00.095932       1 pv_controller.go:803] claim \"volume-9388/pvc-7t8bw\" entered phase \"Bound\"\nI0523 06:31:00.096300       1 pv_controller.go:910] claim \"volume-8206/pvc-vjn69\" bound to volume \"local-84svb\"\nI0523 06:31:00.106326       1 pv_controller.go:859] volume \"local-84svb\" entered phase \"Bound\"\nI0523 06:31:00.106348       1 pv_controller.go:962] volume \"local-84svb\" bound to claim \"volume-8206/pvc-vjn69\"\nI0523 06:31:00.115611       1 pv_controller.go:803] claim \"volume-8206/pvc-vjn69\" entered phase \"Bound\"\nI0523 06:31:00.115794       1 pv_controller.go:910] claim \"volume-7994/pvc-kwvdl\" bound to volume \"local-kvvcf\"\nI0523 06:31:00.124316       1 pv_controller.go:859] volume \"local-kvvcf\" entered phase \"Bound\"\nI0523 06:31:00.124345       1 pv_controller.go:962] volume \"local-kvvcf\" bound to claim \"volume-7994/pvc-kwvdl\"\nI0523 06:31:00.129704       1 pv_controller.go:803] claim \"volume-7994/pvc-kwvdl\" entered phase \"Bound\"\nI0523 06:31:00.129768       1 pv_controller.go:910] claim \"provisioning-8165/pvc-jrgtf\" bound to volume \"local-kfjnf\"\nI0523 06:31:00.138776       1 pv_controller.go:859] volume \"local-kfjnf\" entered phase \"Bound\"\nI0523 06:31:00.138797       1 pv_controller.go:962] volume \"local-kfjnf\" bound to claim \"provisioning-8165/pvc-jrgtf\"\nI0523 06:31:00.144634       1 pv_controller.go:803] claim \"provisioning-8165/pvc-jrgtf\" entered phase \"Bound\"\nI0523 06:31:00.174852       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-433/pod-948cf4d0-948a-41c9-af0e-6167c4b8a07b uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-blrl4 pvc- persistent-local-volumes-test-433 /api/v1/namespaces/persistent-local-volumes-test-433/persistentvolumeclaims/pvc-blrl4 a98ae2cf-90b0-4d0d-b936-1a2d11599f23 8240 0 2021-05-23 06:30:51 +0000 UTC 2021-05-23 06:31:00 +0000 UTC 0xc002cac328 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:51 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pv9g7kp,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-433,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 
06:31:00.174947       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-433/pvc-blrl4 because it is still being used\nE0523 06:31:00.620807       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:31:00.772190       1 namespace_controller.go:185] Namespace has been deleted volume-665\nI0523 06:31:00.895541       1 namespace_controller.go:185] Namespace has been deleted pv-protection-5266\nI0523 06:31:01.279913       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d09414535beab08c\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:01.363101       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d09414535beab08c\") from node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:01.407677       1 aws.go:1998] Assigned mount device bq -> volume vol-0d09414535beab08c\nI0523 06:31:01.715980       1 aws.go:2411] AttachVolume volume=\"vol-0d09414535beab08c\" instance=\"i-0ec4cc948b7b1f9be\" request returned {\n  AttachTime: 2021-05-23 06:31:01.705 +0000 UTC,\n  Device: \"/dev/xvdbq\",\n  InstanceId: \"i-0ec4cc948b7b1f9be\",\n  State: \"attaching\",\n  VolumeId: \"vol-0d09414535beab08c\"\n}\nE0523 06:31:01.775651       1 tokens_controller.go:261] error synchronizing serviceaccount csi-mock-volumes-981-6724/default: secrets \"default-token-jq92c\" is forbidden: unable to create new content in namespace csi-mock-volumes-981-6724 because it is being terminated\nI0523 06:31:01.991977       1 event.go:291] \"Event occurred\" object=\"webhook-8533/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-cbccbf6bb to 1\"\nI0523 06:31:01.992336       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-8533/sample-webhook-deployment-cbccbf6bb\" need=1 creating=1\nI0523 06:31:02.005930       1 event.go:291] \"Event occurred\" object=\"webhook-8533/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-hpk8b\"\nI0523 06:31:02.010712       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-8533/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:02.183899       1 namespace_controller.go:185] Namespace has been deleted provisioning-4828\nI0523 06:31:02.511452       1 namespace_controller.go:185] Namespace has been deleted downward-api-6362\nI0523 06:31:02.687797       1 namespace_controller.go:185] Namespace has been deleted pv-5021\nI0523 06:31:02.975928       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-attacher-mdwtk\" objectUID=7644b8ab-d391-490b-a228-bd12ef1ee066 kind=\"EndpointSlice\"\nI0523 06:31:02.981002       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-attacher-mdwtk\" objectUID=7644b8ab-d391-490b-a228-bd12ef1ee066 
kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:03.026240       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-attacher-0\" objectUID=d98f5c6d-dd85-41e8-9ad0-3e3c4b9da4db kind=\"Pod\"\nI0523 06:31:03.026458       1 stateful_set.go:419] StatefulSet has been deleted volume-7860-3756/csi-hostpath-attacher\nI0523 06:31:03.026541       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-attacher-7bbf48d45d\" objectUID=f28f96fb-09c7-4824-9b94-72d107acb16a kind=\"ControllerRevision\"\nI0523 06:31:03.028084       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-attacher-7bbf48d45d\" objectUID=f28f96fb-09c7-4824-9b94-72d107acb16a kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:03.028084       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-attacher-0\" objectUID=d98f5c6d-dd85-41e8-9ad0-3e3c4b9da4db kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:03.078775       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-6143^64da217a-bb90-11eb-b06d-82bf8f3b607e\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:03.093030       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-6143^64da217a-bb90-11eb-b06d-82bf8f3b607e\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:03.114098       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpathplugin-tsw6r\" objectUID=acdadf78-3ebd-45ee-bf15-52bbb674649f kind=\"EndpointSlice\"\nI0523 06:31:03.119414       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpathplugin-tsw6r\" objectUID=acdadf78-3ebd-45ee-bf15-52bbb674649f kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:03.119638       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-6143^64da217a-bb90-11eb-b06d-82bf8f3b607e\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:03.197796       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpathplugin-5fffbb84f4\" objectUID=b089227d-ec73-48cc-bbc8-6afaca4bd153 kind=\"ControllerRevision\"\nI0523 06:31:03.198014       1 stateful_set.go:419] StatefulSet has been deleted volume-7860-3756/csi-hostpathplugin\nI0523 06:31:03.198067       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpathplugin-0\" objectUID=c0e1ed44-db01-4875-a242-8efc323e4429 kind=\"Pod\"\nI0523 06:31:03.230188       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpathplugin-5fffbb84f4\" objectUID=b089227d-ec73-48cc-bbc8-6afaca4bd153 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:03.237524       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpathplugin-0\" objectUID=c0e1ed44-db01-4875-a242-8efc323e4429 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:03.298925       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-provisioner-2bfg5\" objectUID=f9385f6e-c4ce-438e-a7d1-a33bb517b806 
kind=\"EndpointSlice\"\nI0523 06:31:03.315695       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-provisioner-2bfg5\" objectUID=f9385f6e-c4ce-438e-a7d1-a33bb517b806 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:03.327699       1 pvc_protection_controller.go:291] PVC provisioning-5480/csi-hostpathr7w77 is unused\nI0523 06:31:03.367912       1 pv_controller.go:633] volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:31:03.384180       1 pv_controller.go:859] volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" entered phase \"Released\"\nI0523 06:31:03.390790       1 pv_controller.go:1321] isVolumeReleased[pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd]: volume is released\nI0523 06:31:03.404351       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-provisioner-5c58b9fb8c\" objectUID=f58cee78-e523-4193-826c-01038b1da12f kind=\"ControllerRevision\"\nI0523 06:31:03.404581       1 stateful_set.go:419] StatefulSet has been deleted volume-7860-3756/csi-hostpath-provisioner\nI0523 06:31:03.404636       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-provisioner-0\" objectUID=62848050-e1c6-4d3b-a101-7fc7f9cd2d7b kind=\"Pod\"\nI0523 06:31:03.406936       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-provisioner-0\" objectUID=62848050-e1c6-4d3b-a101-7fc7f9cd2d7b kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:03.407144       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-provisioner-5c58b9fb8c\" objectUID=f58cee78-e523-4193-826c-01038b1da12f kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:03.432838       1 pv_controller_base.go:500] deletion of claim \"provisioning-5480/csi-hostpathr7w77\" was already processed\nI0523 06:31:03.443129       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-resizer-pnc2j\" objectUID=9931d7b5-74cf-4778-951f-63f261c18f69 kind=\"EndpointSlice\"\nI0523 06:31:03.446981       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-resizer-pnc2j\" objectUID=9931d7b5-74cf-4778-951f-63f261c18f69 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:03.465466       1 pvc_protection_controller.go:291] PVC provisioning-6143/csi-hostpathh67lk is unused\nI0523 06:31:03.469754       1 pv_controller.go:633] volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:31:03.472948       1 pv_controller.go:859] volume \"pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851\" entered phase \"Released\"\nI0523 06:31:03.481138       1 pv_controller.go:1321] isVolumeReleased[pvc-15e0782d-3f18-4e7c-8e24-224ca30e9851]: volume is released\nI0523 06:31:03.495290       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-resizer-68795fd88c\" objectUID=ecc4d653-50d3-46e2-a663-0dce1fe68e95 kind=\"ControllerRevision\"\nI0523 06:31:03.495530       1 stateful_set.go:419] StatefulSet has been deleted volume-7860-3756/csi-hostpath-resizer\nI0523 06:31:03.495584       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-resizer-0\" objectUID=de65f906-cd7f-419d-bcce-c9fd53ee20a8 kind=\"Pod\"\nI0523 06:31:03.497636       1 pv_controller_base.go:500] deletion of claim \"provisioning-6143/csi-hostpathh67lk\" was 
already processed\nI0523 06:31:03.498228       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-resizer-68795fd88c\" objectUID=ecc4d653-50d3-46e2-a663-0dce1fe68e95 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:03.500066       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-resizer-0\" objectUID=de65f906-cd7f-419d-bcce-c9fd53ee20a8 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:03.529993       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-snapshotter-6zlmr\" objectUID=8baee7a8-e7a7-4e90-a350-bc655d9920ce kind=\"EndpointSlice\"\nI0523 06:31:03.532014       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-snapshotter-6zlmr\" objectUID=8baee7a8-e7a7-4e90-a350-bc655d9920ce kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:03.572644       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-snapshotter-86bc85fd7\" objectUID=064477a2-2bd3-41b2-82ec-2f706842639b kind=\"ControllerRevision\"\nI0523 06:31:03.572812       1 stateful_set.go:419] StatefulSet has been deleted volume-7860-3756/csi-hostpath-snapshotter\nI0523 06:31:03.572855       1 garbagecollector.go:404] \"Processing object\" object=\"volume-7860-3756/csi-hostpath-snapshotter-0\" objectUID=03dd1686-4532-4878-b0f8-c9cee460f909 kind=\"Pod\"\nI0523 06:31:03.574440       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-snapshotter-86bc85fd7\" objectUID=064477a2-2bd3-41b2-82ec-2f706842639b kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:03.574700       1 garbagecollector.go:519] \"Deleting object\" object=\"volume-7860-3756/csi-hostpath-snapshotter-0\" objectUID=03dd1686-4532-4878-b0f8-c9cee460f909 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:03.814891       1 aws.go:2021] Releasing in-process attachment entry: bq -> volume vol-0d09414535beab08c\nI0523 06:31:03.814939       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d09414535beab08c\") from node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:03.815219       1 event.go:291] \"Event occurred\" object=\"volume-882/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-volume-0\\\" \"\nI0523 06:31:04.252781       1 namespace_controller.go:185] Namespace has been deleted resourcequota-5729\nI0523 06:31:04.838673       1 pvc_protection_controller.go:291] PVC volume-8206/pvc-vjn69 is unused\nI0523 06:31:04.844427       1 pv_controller.go:633] volume \"local-84svb\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:31:04.847069       1 pv_controller.go:859] volume \"local-84svb\" entered phase \"Released\"\nI0523 06:31:04.875733       1 pv_controller_base.go:500] deletion of claim \"volume-8206/pvc-vjn69\" was already processed\nE0523 06:31:05.372220       1 tokens_controller.go:261] error synchronizing serviceaccount container-lifecycle-hook-5142/default: secrets \"default-token-h2nm6\" is forbidden: unable to create new content in namespace container-lifecycle-hook-5142 because it is being terminated\nI0523 06:31:05.425630       1 event.go:291] \"Event occurred\" object=\"statefulset-7578/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0523 06:31:05.653904       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7423\nI0523 06:31:05.775739       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-433/pvc-blrl4 is unused\nI0523 06:31:05.804319       1 pv_controller.go:633] volume \"local-pv9g7kp\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:31:05.823235       1 pv_controller.go:859] volume \"local-pv9g7kp\" entered phase \"Released\"\nI0523 06:31:05.868698       1 namespace_controller.go:185] Namespace has been deleted volume-expand-7131\nI0523 06:31:05.874977       1 pv_controller_base.go:500] deletion of claim \"persistent-local-volumes-test-433/pvc-blrl4\" was already processed\nE0523 06:31:06.028876       1 tokens_controller.go:261] error synchronizing serviceaccount configmap-4776/default: secrets \"default-token-x9dz6\" is forbidden: unable to create new content in namespace configmap-4776 because it is being terminated\nE0523 06:31:06.249720       1 tokens_controller.go:261] error synchronizing serviceaccount disruption-5632/default: secrets \"default-token-kmvwq\" is forbidden: unable to create new content in namespace disruption-5632 because it is being terminated\nI0523 06:31:06.338312       1 namespace_controller.go:185] Namespace has been deleted volume-7860\nE0523 06:31:06.778209       1 tokens_controller.go:261] error synchronizing serviceaccount volume-8646/default: secrets \"default-token-4nzn4\" is forbidden: unable to create new content in namespace volume-8646 because it is being terminated\nI0523 06:31:06.982082       1 resource_quota_controller.go:434] syncing resource quota controller with updated resources from discovery: added: [], removed: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-557-crds]\nI0523 06:31:06.982136       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0523 06:31:06.982159       1 shared_informer.go:247] Caches are synced for resource quota \nI0523 06:31:06.982166       1 resource_quota_controller.go:453] synced quota controller\nE0523 06:31:07.004796       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:31:07.032931       1 pvc_protection_controller.go:291] PVC provisioning-4582/pvc-qfmx5 is unused\nI0523 06:31:07.037044       1 pv_controller.go:633] volume \"pvc-76cfdc98-e0b3-4c5b-b780-8d6ab68720fc\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:31:07.040192       1 pv_controller.go:859] volume \"pvc-76cfdc98-e0b3-4c5b-b780-8d6ab68720fc\" entered phase \"Released\"\nI0523 06:31:07.041604       1 pv_controller.go:1321] isVolumeReleased[pvc-76cfdc98-e0b3-4c5b-b780-8d6ab68720fc]: volume is released\nI0523 06:31:07.061410       1 pv_controller_base.go:500] deletion of claim \"provisioning-4582/pvc-qfmx5\" was already processed\nI0523 06:31:07.104337       1 garbagecollector.go:199] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-557-crds]\nI0523 06:31:07.104419       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0523 06:31:07.104456       1 shared_informer.go:247] Caches are 
synced for garbage collector \nI0523 06:31:07.104463       1 garbagecollector.go:240] synced garbage collector\nI0523 06:31:07.522864       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5480^6c623faa-bb90-11eb-a1ce-6eeb2c65d25c\") on node \"ip-172-20-52-132.ca-central-1.compute.internal\" \nI0523 06:31:07.524221       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5480^6c623faa-bb90-11eb-a1ce-6eeb2c65d25c\") on node \"ip-172-20-52-132.ca-central-1.compute.internal\" \nI0523 06:31:07.528396       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-a3f1bb31-179b-4446-8c64-bf7403d4e6cd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-5480^6c623faa-bb90-11eb-a1ce-6eeb2c65d25c\") on node \"ip-172-20-52-132.ca-central-1.compute.internal\" \nI0523 06:31:08.529187       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0aeedb9e5743388a2\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:31:08.532939       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0aeedb9e5743388a2\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nE0523 06:31:08.699787       1 tokens_controller.go:261] error synchronizing serviceaccount volume-7860-3756/default: secrets \"default-token-sz8r8\" is forbidden: unable to create new content in namespace volume-7860-3756 because it is being terminated\nI0523 06:31:08.980140       1 pvc_protection_controller.go:291] PVC volume-2954/awsb9xzg is unused\nI0523 06:31:08.990167       1 pv_controller.go:633] volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:31:08.993860       1 pv_controller.go:859] volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" entered phase \"Released\"\nI0523 06:31:08.994989       1 pv_controller.go:1321] isVolumeReleased[pvc-c34ad428-2544-4cd7-abef-217162aaecde]: volume is released\nI0523 06:31:09.119867       1 aws_util.go:62] Error deleting EBS Disk volume aws://ca-central-1a/vol-0aeedb9e5743388a2: error deleting EBS volume \"vol-0aeedb9e5743388a2\" since volume is currently attached to \"i-00a61631f958158b2\"\nE0523 06:31:09.119924       1 goroutinemap.go:150] Operation for \"delete-pvc-c34ad428-2544-4cd7-abef-217162aaecde[b8b068b7-ebc1-4c46-a0a9-59e9af2fc48a]\" failed. No retries permitted until 2021-05-23 06:31:09.619907195 +0000 UTC m=+449.732300877 (durationBeforeRetry 500ms). 
Error: \"error deleting EBS volume \\\"vol-0aeedb9e5743388a2\\\" since volume is currently attached to \\\"i-00a61631f958158b2\\\"\"\nI0523 06:31:09.120072       1 event.go:291] \"Event occurred\" object=\"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0aeedb9e5743388a2\\\" since volume is currently attached to \\\"i-00a61631f958158b2\\\"\"\nI0523 06:31:09.635024       1 garbagecollector.go:404] \"Processing object\" object=\"pods-7017/pod-submit-status-0-13\" objectUID=75898493-271b-4215-af66-badaff161ac8 kind=\"CiliumEndpoint\"\nI0523 06:31:10.037773       1 garbagecollector.go:519] \"Deleting object\" object=\"pods-7017/pod-submit-status-0-13\" objectUID=75898493-271b-4215-af66-badaff161ac8 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:10.158967       1 garbagecollector.go:404] \"Processing object\" object=\"webhook-8533/e2e-test-webhook-4hm5h\" objectUID=e501170e-ae68-4fbc-849a-9936886f1c64 kind=\"EndpointSlice\"\nI0523 06:31:10.161643       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-8533/e2e-test-webhook-4hm5h\" objectUID=e501170e-ae68-4fbc-849a-9936886f1c64 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:10.203599       1 garbagecollector.go:404] \"Processing object\" object=\"webhook-8533/sample-webhook-deployment-cbccbf6bb\" objectUID=38429bde-dd0a-4f99-b919-9fad76baf7de kind=\"ReplicaSet\"\nI0523 06:31:10.203817       1 deployment_controller.go:581] Deployment webhook-8533/sample-webhook-deployment has been deleted\nI0523 06:31:10.204887       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-8533/sample-webhook-deployment-cbccbf6bb\" objectUID=38429bde-dd0a-4f99-b919-9fad76baf7de kind=\"ReplicaSet\" propagationPolicy=Background\nI0523 06:31:10.209460       1 garbagecollector.go:404] \"Processing object\" object=\"webhook-8533/sample-webhook-deployment-cbccbf6bb-hpk8b\" objectUID=3765ac5e-2963-409d-83d3-832c5aee12be kind=\"Pod\"\nI0523 06:31:10.212530       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-8533/sample-webhook-deployment-cbccbf6bb-hpk8b\" objectUID=3765ac5e-2963-409d-83d3-832c5aee12be kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:10.227803       1 garbagecollector.go:404] \"Processing object\" object=\"webhook-8533/sample-webhook-deployment-cbccbf6bb-hpk8b\" objectUID=d31bcb82-254a-4d73-ace9-fc519cce9435 kind=\"CiliumEndpoint\"\nI0523 06:31:10.230075       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-8533/sample-webhook-deployment-cbccbf6bb-hpk8b\" objectUID=d31bcb82-254a-4d73-ace9-fc519cce9435 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:10.492596       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-attacher-jq6bm\" objectUID=6ded37c4-d465-4b09-9f38-a00d17776816 kind=\"EndpointSlice\"\nI0523 06:31:10.495220       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-attacher-jq6bm\" objectUID=6ded37c4-d465-4b09-9f38-a00d17776816 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:10.548882       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-attacher-6b476bdccd\" objectUID=90e6dc31-c7ef-4d7c-a20c-d6d72ac9ed59 kind=\"ControllerRevision\"\nI0523 06:31:10.549074       1 stateful_set.go:419] StatefulSet has been deleted provisioning-6143-3657/csi-hostpath-attacher\nI0523 
06:31:10.549131       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-attacher-0\" objectUID=0b013624-3f09-465f-953e-177c5d966a7e kind=\"Pod\"\nI0523 06:31:10.554046       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9531-9854/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0523 06:31:10.560191       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-attacher-0\" objectUID=0b013624-3f09-465f-953e-177c5d966a7e kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:10.563003       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-attacher-6b476bdccd\" objectUID=90e6dc31-c7ef-4d7c-a20c-d6d72ac9ed59 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:10.589360       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9531-9854/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0523 06:31:10.625058       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9531-9854/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nI0523 06:31:10.645156       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpathplugin-z4p4x\" objectUID=00b30600-7f49-4312-8e5a-49bdbf6f2ed6 kind=\"EndpointSlice\"\nI0523 06:31:10.655206       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpathplugin-z4p4x\" objectUID=00b30600-7f49-4312-8e5a-49bdbf6f2ed6 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:10.702384       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpathplugin-75659f46c4\" objectUID=2f10fb59-5405-40d4-bb36-0d138f8e9051 kind=\"ControllerRevision\"\nI0523 06:31:10.702606       1 stateful_set.go:419] StatefulSet has been deleted provisioning-6143-3657/csi-hostpathplugin\nI0523 06:31:10.702660       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpathplugin-0\" objectUID=55f7ea9d-732b-4a72-9966-eb8da02e1e2d kind=\"Pod\"\nI0523 06:31:10.707348       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpathplugin-75659f46c4\" objectUID=2f10fb59-5405-40d4-bb36-0d138f8e9051 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:10.707761       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpathplugin-0\" objectUID=55f7ea9d-732b-4a72-9966-eb8da02e1e2d kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:10.757326       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-provisioner-xvfjn\" objectUID=2d21900d-19b1-462d-8b31-f208f63c77d7 kind=\"EndpointSlice\"\nI0523 06:31:10.762503       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-provisioner-xvfjn\" objectUID=2d21900d-19b1-462d-8b31-f208f63c77d7 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:10.816044       1 garbagecollector.go:404] \"Processing object\" 
object=\"provisioning-6143-3657/csi-hostpath-provisioner-7fb6f9bff5\" objectUID=9cb9c434-3dd1-4dbd-aa3a-5bf052ff639d kind=\"ControllerRevision\"\nI0523 06:31:10.816243       1 stateful_set.go:419] StatefulSet has been deleted provisioning-6143-3657/csi-hostpath-provisioner\nI0523 06:31:10.816347       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-provisioner-0\" objectUID=7af02aa3-91e4-438c-8cc0-2c82b3b6441e kind=\"Pod\"\nI0523 06:31:10.820546       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-provisioner-0\" objectUID=7af02aa3-91e4-438c-8cc0-2c82b3b6441e kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:10.827929       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-provisioner-7fb6f9bff5\" objectUID=9cb9c434-3dd1-4dbd-aa3a-5bf052ff639d kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:10.855573       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-resizer-fcfsl\" objectUID=11a43195-69b0-4d16-8fe5-2fde3a935997 kind=\"EndpointSlice\"\nI0523 06:31:10.859995       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-resizer-fcfsl\" objectUID=11a43195-69b0-4d16-8fe5-2fde3a935997 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:10.908871       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-resizer-ff974dc4c\" objectUID=b6a443aa-3ceb-4e81-bbc3-449e2de493b0 kind=\"ControllerRevision\"\nI0523 06:31:10.909060       1 stateful_set.go:419] StatefulSet has been deleted provisioning-6143-3657/csi-hostpath-resizer\nI0523 06:31:10.909151       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-resizer-0\" objectUID=26c003e2-0465-451f-a4db-12690b9f0d89 kind=\"Pod\"\nI0523 06:31:10.911073       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-resizer-ff974dc4c\" objectUID=b6a443aa-3ceb-4e81-bbc3-449e2de493b0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:10.911503       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-resizer-0\" objectUID=26c003e2-0465-451f-a4db-12690b9f0d89 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:10.948167       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-snapshotter-5pb9p\" objectUID=5d426e2f-663e-4b6a-962b-e2cd36ef88b0 kind=\"EndpointSlice\"\nI0523 06:31:10.951703       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-snapshotter-5pb9p\" objectUID=5d426e2f-663e-4b6a-962b-e2cd36ef88b0 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:10.992419       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-snapshotter-5c9556c695\" objectUID=a4304c1f-553f-4bb3-82b3-d3b63a8c64f0 kind=\"ControllerRevision\"\nI0523 06:31:10.992651       1 stateful_set.go:419] StatefulSet has been deleted provisioning-6143-3657/csi-hostpath-snapshotter\nI0523 06:31:10.992708       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-6143-3657/csi-hostpath-snapshotter-0\" objectUID=eb1d0185-66a6-4dce-b933-6e4dba814688 kind=\"Pod\"\nI0523 06:31:10.994128       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-snapshotter-0\" 
objectUID=eb1d0185-66a6-4dce-b933-6e4dba814688 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:10.994348       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-6143-3657/csi-hostpath-snapshotter-5c9556c695\" objectUID=a4304c1f-553f-4bb3-82b3-d3b63a8c64f0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:11.111098       1 namespace_controller.go:185] Namespace has been deleted configmap-4776\nE0523 06:31:11.279394       1 tokens_controller.go:261] error synchronizing serviceaccount volume-8206/default: secrets \"default-token-6g9vc\" is forbidden: unable to create new content in namespace volume-8206 because it is being terminated\nI0523 06:31:11.324958       1 namespace_controller.go:185] Namespace has been deleted disruption-2-2906\nI0523 06:31:11.352298       1 namespace_controller.go:185] Namespace has been deleted disruption-5632\nI0523 06:31:11.701725       1 pvc_protection_controller.go:291] PVC provisioning-8165/pvc-jrgtf is unused\nI0523 06:31:11.706856       1 pv_controller.go:633] volume \"local-kfjnf\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:31:11.709322       1 pv_controller.go:859] volume \"local-kfjnf\" entered phase \"Released\"\nI0523 06:31:11.739541       1 pv_controller_base.go:500] deletion of claim \"provisioning-8165/pvc-jrgtf\" was already processed\nI0523 06:31:11.775201       1 pv_controller.go:859] volume \"local-pvxsp6j\" entered phase \"Available\"\nI0523 06:31:11.797087       1 namespace_controller.go:185] Namespace has been deleted volume-8646\nI0523 06:31:11.806125       1 pv_controller.go:910] claim \"persistent-local-volumes-test-3125/pvc-mf2ww\" bound to volume \"local-pvxsp6j\"\nI0523 06:31:11.812522       1 pv_controller.go:859] volume \"local-pvxsp6j\" entered phase \"Bound\"\nI0523 06:31:11.812543       1 pv_controller.go:962] volume \"local-pvxsp6j\" bound to claim \"persistent-local-volumes-test-3125/pvc-mf2ww\"\nI0523 06:31:11.822200       1 pv_controller.go:803] claim \"persistent-local-volumes-test-3125/pvc-mf2ww\" entered phase \"Bound\"\nI0523 06:31:11.976765       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-981-6724\nE0523 06:31:11.978431       1 tokens_controller.go:261] error synchronizing serviceaccount projected-3732/default: secrets \"default-token-j2spt\" is forbidden: unable to create new content in namespace projected-3732 because it is being terminated\nI0523 06:31:12.106079       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-8699/pod-848bd4fb-28d8-4950-8dd3-b5870b05f838 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-mgc4m pvc- persistent-local-volumes-test-8699 /api/v1/namespaces/persistent-local-volumes-test-8699/persistentvolumeclaims/pvc-mgc4m f0767f7a-516f-4c19-b49c-621db1e9237e 9039 0 2021-05-23 06:30:53 +0000 UTC 2021-05-23 06:31:12 +0000 UTC 0xc002e325e8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:53 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:53 +0000 UTC FieldsV1 
{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvpt2pq,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-8699,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:31:12.106143       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-8699/pvc-mgc4m because it is still being used\nI0523 06:31:12.758822       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-500/pvc-nfbjr is unused\nI0523 06:31:12.766047       1 pv_controller.go:633] volume \"pvc-2b6a9c3c-75ab-489e-8bf3-27150ed976d5\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:31:12.769083       1 pv_controller.go:859] volume \"pvc-2b6a9c3c-75ab-489e-8bf3-27150ed976d5\" entered phase \"Released\"\nI0523 06:31:12.770733       1 pv_controller.go:1321] isVolumeReleased[pvc-2b6a9c3c-75ab-489e-8bf3-27150ed976d5]: volume is released\nI0523 06:31:12.781607       1 pv_controller_base.go:500] deletion of claim \"csi-mock-volumes-500/pvc-nfbjr\" was already processed\nI0523 06:31:13.772222       1 namespace_controller.go:185] Namespace has been deleted provisioning-6143\nI0523 06:31:13.779317       1 event.go:291] \"Event occurred\" object=\"statefulset-3948/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0523 06:31:13.941184       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0aeedb9e5743388a2\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:31:14.197325       1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-6317/quota-not-terminating\nI0523 06:31:14.199447       1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-6317/quota-terminating\nI0523 06:31:14.535243       1 namespace_controller.go:185] Namespace has been deleted downward-api-5987\nI0523 06:31:15.088931       1 pv_controller.go:1321] isVolumeReleased[pvc-c34ad428-2544-4cd7-abef-217162aaecde]: volume is released\nI0523 06:31:15.253105       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-attacher-mjp4h\" objectUID=ce74d470-6581-43ab-8eeb-53183a814dd9 kind=\"EndpointSlice\"\nI0523 06:31:15.256884       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ca-central-1a/vol-0aeedb9e5743388a2\nI0523 06:31:15.256907       1 pv_controller.go:1416] volume \"pvc-c34ad428-2544-4cd7-abef-217162aaecde\" deleted\nE0523 06:31:15.270085       1 tokens_controller.go:261] error synchronizing serviceaccount webhook-8533-markers/default: secrets \"default-token-b4jvr\" is forbidden: unable to create new content in namespace webhook-8533-markers because it is being terminated\nI0523 06:31:15.282082       1 pv_controller_base.go:500] 
deletion of claim \"volume-2954/awsb9xzg\" was already processed\nI0523 06:31:15.315792       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-attacher-859fddbfd5\" objectUID=db655a90-16c0-4451-a342-5a144e8b2f2a kind=\"ControllerRevision\"\nI0523 06:31:15.315825       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5480-5517/csi-hostpath-attacher\nI0523 06:31:15.315864       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-attacher-0\" objectUID=bd11ccba-7215-4b56-882a-498a7db23287 kind=\"Pod\"\nI0523 06:31:15.390240       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpathplugin-tmm5s\" objectUID=9702d3d4-c246-418e-9017-5973ac36abe3 kind=\"EndpointSlice\"\nI0523 06:31:15.433081       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpathplugin-678f45d96c\" objectUID=797ba3ec-bab4-481c-b390-a1f12d549f87 kind=\"ControllerRevision\"\nI0523 06:31:15.433118       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5480-5517/csi-hostpathplugin\nI0523 06:31:15.433169       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpathplugin-0\" objectUID=c654b215-f7f5-4305-8cbd-ad6e599a5bb6 kind=\"Pod\"\nI0523 06:31:15.470695       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-provisioner-4bvld\" objectUID=9fef706b-925b-4133-9353-05318cfeea3d kind=\"EndpointSlice\"\nI0523 06:31:15.523631       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-provisioner-59cdf96457\" objectUID=a4451956-6668-4605-9232-92c7b888a677 kind=\"ControllerRevision\"\nI0523 06:31:15.523668       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5480-5517/csi-hostpath-provisioner\nI0523 06:31:15.523731       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-provisioner-0\" objectUID=b55daec9-aa1b-4769-8d79-354a7c4b04c3 kind=\"Pod\"\nI0523 06:31:15.563275       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-resizer-nqjpc\" objectUID=fb96c0a9-9651-40e6-97d8-0c6532e6b325 kind=\"EndpointSlice\"\nI0523 06:31:15.605597       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-resizer-0\" objectUID=e5be473a-ab49-42ef-b0af-d2b5d4e21027 kind=\"Pod\"\nI0523 06:31:15.605629       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5480-5517/csi-hostpath-resizer\nI0523 06:31:15.605661       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-resizer-59b9bcb59b\" objectUID=2a05843c-53ac-49b0-8c61-68545f7eb207 kind=\"ControllerRevision\"\nI0523 06:31:15.640789       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-snapshotter-w84bf\" objectUID=6eb229ce-0f96-41f4-948a-e863fc3353a2 kind=\"EndpointSlice\"\nI0523 06:31:15.659726       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-attacher-mjp4h\" objectUID=ce74d470-6581-43ab-8eeb-53183a814dd9 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:15.660451       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-provisioner-0\" objectUID=b55daec9-aa1b-4769-8d79-354a7c4b04c3 kind=\"Pod\" propagationPolicy=Background\nI0523 
06:31:15.660730       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-snapshotter-w84bf\" objectUID=6eb229ce-0f96-41f4-948a-e863fc3353a2 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:15.660903       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-resizer-0\" objectUID=e5be473a-ab49-42ef-b0af-d2b5d4e21027 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:15.661056       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-resizer-nqjpc\" objectUID=fb96c0a9-9651-40e6-97d8-0c6532e6b325 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:15.661210       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-provisioner-4bvld\" objectUID=9fef706b-925b-4133-9353-05318cfeea3d kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:15.661361       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpathplugin-tmm5s\" objectUID=9702d3d4-c246-418e-9017-5973ac36abe3 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:15.661533       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-attacher-0\" objectUID=bd11ccba-7215-4b56-882a-498a7db23287 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:15.661745       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-attacher-859fddbfd5\" objectUID=db655a90-16c0-4451-a342-5a144e8b2f2a kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:15.661906       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-resizer-59b9bcb59b\" objectUID=2a05843c-53ac-49b0-8c61-68545f7eb207 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:15.662057       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpathplugin-678f45d96c\" objectUID=797ba3ec-bab4-481c-b390-a1f12d549f87 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:15.662205       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpathplugin-0\" objectUID=c654b215-f7f5-4305-8cbd-ad6e599a5bb6 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:15.662354       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-provisioner-59cdf96457\" objectUID=a4451956-6668-4605-9232-92c7b888a677 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:15.702945       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-snapshotter-5f8b99ccf8\" objectUID=c401ca69-6768-41db-9098-c71a87f74868 kind=\"ControllerRevision\"\nI0523 06:31:15.703137       1 stateful_set.go:419] StatefulSet has been deleted provisioning-5480-5517/csi-hostpath-snapshotter\nI0523 06:31:15.703194       1 garbagecollector.go:404] \"Processing object\" object=\"provisioning-5480-5517/csi-hostpath-snapshotter-0\" objectUID=55c31676-9390-4735-a4e1-0896a3150bba kind=\"Pod\"\nI0523 06:31:15.710714       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-snapshotter-5f8b99ccf8\" objectUID=c401ca69-6768-41db-9098-c71a87f74868 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:15.710929       1 garbagecollector.go:519] \"Deleting object\" object=\"provisioning-5480-5517/csi-hostpath-snapshotter-0\" 
objectUID=55c31676-9390-4735-a4e1-0896a3150bba kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:15.740144       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-7796/pvc-sl5k2 is unused\nI0523 06:31:15.743990       1 pv_controller.go:633] volume \"pvc-39d90e63-78db-4a16-a17b-265e6f8aab14\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:31:15.746918       1 pv_controller.go:859] volume \"pvc-39d90e63-78db-4a16-a17b-265e6f8aab14\" entered phase \"Released\"\nI0523 06:31:15.749691       1 pv_controller.go:1321] isVolumeReleased[pvc-39d90e63-78db-4a16-a17b-265e6f8aab14]: volume is released\nI0523 06:31:15.757798       1 pv_controller_base.go:500] deletion of claim \"csi-mock-volumes-7796/pvc-sl5k2\" was already processed\nI0523 06:31:15.824561       1 namespace_controller.go:185] Namespace has been deleted ingress-4845\nI0523 06:31:15.885983       1 namespace_controller.go:185] Namespace has been deleted downward-api-134\nI0523 06:31:15.973803       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-433\nI0523 06:31:16.394695       1 namespace_controller.go:185] Namespace has been deleted volume-8206\nI0523 06:31:16.578831       1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-6934\nI0523 06:31:16.787190       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8182-8749/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nE0523 06:31:17.352144       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-2653/pvc-bn96z: storageclass.storage.k8s.io \"provisioning-2653\" not found\nI0523 06:31:17.352422       1 event.go:291] \"Event occurred\" object=\"provisioning-2653/pvc-bn96z\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2653\\\" not found\"\nI0523 06:31:17.389854       1 pv_controller.go:859] volume \"local-mgg9s\" entered phase \"Available\"\nE0523 06:31:17.571781       1 tokens_controller.go:261] error synchronizing serviceaccount node-lease-test-1034/default: secrets \"default-token-bbnbg\" is forbidden: unable to create new content in namespace node-lease-test-1034 because it is being terminated\nI0523 06:31:17.866196       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:31:17.886536       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:31:17.900395       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-8699/pod-ab728eeb-aa41-4dd6-bd94-25db7b74d1b7 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-mgc4m pvc- persistent-local-volumes-test-8699 /api/v1/namespaces/persistent-local-volumes-test-8699/persistentvolumeclaims/pvc-mgc4m f0767f7a-516f-4c19-b49c-621db1e9237e 9039 0 2021-05-23 06:30:53 +0000 UTC 2021-05-23 06:31:12 +0000 UTC 0xc002e325e8 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] 
[kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:30:53 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:30:53 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvpt2pq,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-8699,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:31:17.900477       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-8699/pvc-mgc4m because it is still being used\nI0523 06:31:17.968882       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-8699/pvc-mgc4m is unused\nE0523 06:31:18.029779       1 tokens_controller.go:261] error synchronizing serviceaccount csi-mock-volumes-500/default: secrets \"default-token-5jkj5\" is forbidden: unable to create new content in namespace csi-mock-volumes-500 because it is being terminated\nI0523 06:31:18.061664       1 pv_controller.go:633] volume \"local-pvpt2pq\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:31:18.088989       1 pv_controller.go:859] volume \"local-pvpt2pq\" entered phase \"Released\"\nI0523 06:31:18.147731       1 pv_controller_base.go:500] deletion of claim \"persistent-local-volumes-test-8699/pvc-mgc4m\" was already processed\nE0523 06:31:18.451040       1 tokens_controller.go:261] error synchronizing serviceaccount secrets-954/default: secrets \"default-token-j97pl\" is forbidden: unable to create new content in namespace secrets-954 because it is being terminated\nI0523 06:31:18.557014       1 namespace_controller.go:185] Namespace has been deleted provisioning-5480\nE0523 06:31:18.750068       1 tokens_controller.go:261] error synchronizing serviceaccount topology-4959/default: secrets \"default-token-vnlst\" is forbidden: unable to create new content in namespace topology-4959 because it is being terminated\nI0523 06:31:19.001470       1 namespace_controller.go:185] Namespace has been deleted volume-7860-3756\nE0523 06:31:19.052800       1 tokens_controller.go:261] error synchronizing serviceaccount pv-3276/default: secrets \"default-token-qrxnq\" is forbidden: unable to create new content in namespace pv-3276 because it is being terminated\nI0523 06:31:19.271939       1 namespace_controller.go:185] Namespace has been deleted resourcequota-6317\nE0523 06:31:19.334632       1 tokens_controller.go:261] error synchronizing serviceaccount multi-az-6568/default: secrets \"default-token-28ltm\" is forbidden: unable to create new content in namespace multi-az-6568 because it is being terminated\nI0523 06:31:19.783954       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-500-2090/csi-mockplugin-7976fdbbb8\" 
objectUID=eb0865d0-d9fc-4dc1-a0e7-135701bbeac0 kind=\"ControllerRevision\"\nI0523 06:31:19.784197       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-500-2090/csi-mockplugin\nI0523 06:31:19.784248       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-500-2090/csi-mockplugin-0\" objectUID=74412855-785f-4d9f-b684-9c80f12e3eb9 kind=\"Pod\"\nI0523 06:31:19.787096       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-500-2090/csi-mockplugin-0\" objectUID=74412855-785f-4d9f-b684-9c80f12e3eb9 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:19.787306       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-500-2090/csi-mockplugin-7976fdbbb8\" objectUID=eb0865d0-d9fc-4dc1-a0e7-135701bbeac0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:20.011761       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-500-2090/csi-mockplugin-resizer-d5b58968b\" objectUID=c5dd1d05-c408-4073-a56f-ee6f789d4a4d kind=\"ControllerRevision\"\nI0523 06:31:20.011984       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-500-2090/csi-mockplugin-resizer\nI0523 06:31:20.012034       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-500-2090/csi-mockplugin-resizer-0\" objectUID=c14887e4-4cfa-444f-aaec-ebefd8c76910 kind=\"Pod\"\nI0523 06:31:20.049721       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-500-2090/csi-mockplugin-resizer-d5b58968b\" objectUID=c5dd1d05-c408-4073-a56f-ee6f789d4a4d kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:20.068614       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-500-2090/csi-mockplugin-resizer-0\" objectUID=c14887e4-4cfa-444f-aaec-ebefd8c76910 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:20.353592       1 namespace_controller.go:185] Namespace has been deleted webhook-8533\nI0523 06:31:20.375572       1 namespace_controller.go:185] Namespace has been deleted webhook-8533-markers\nI0523 06:31:20.663938       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9531/pvc-9jc2f\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-9531\\\" or manually created by system administrator\"\nI0523 06:31:20.853665       1 pv_controller.go:859] volume \"pvc-2b2dd79b-dbc0-4b42-b08c-40ce86544e89\" entered phase \"Bound\"\nI0523 06:31:20.853698       1 pv_controller.go:962] volume \"pvc-2b2dd79b-dbc0-4b42-b08c-40ce86544e89\" bound to claim \"csi-mock-volumes-9531/pvc-9jc2f\"\nI0523 06:31:20.944961       1 pv_controller.go:803] claim \"csi-mock-volumes-9531/pvc-9jc2f\" entered phase \"Bound\"\nI0523 06:31:21.408271       1 event.go:291] \"Event occurred\" object=\"statefulset-3948/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE0523 06:31:21.760677       1 tokens_controller.go:261] error synchronizing serviceaccount csi-mock-volumes-7796/default: secrets \"default-token-n8ld7\" is forbidden: unable to create new content in namespace csi-mock-volumes-7796 because it is being terminated\nE0523 06:31:22.145407       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-5480-5517/default: secrets \"default-token-4f96f\" is forbidden: unable to create 
new content in namespace provisioning-5480-5517 because it is being terminated\nI0523 06:31:22.174869       1 namespace_controller.go:185] Namespace has been deleted projected-3732\nI0523 06:31:22.218193       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-8182/pvc-qj6fc\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8182\\\" or manually created by system administrator\"\nI0523 06:31:22.259504       1 pv_controller.go:859] volume \"pvc-09e7ed26-3ccf-4eb7-9493-b65dffaa4ffc\" entered phase \"Bound\"\nI0523 06:31:22.259535       1 pv_controller.go:962] volume \"pvc-09e7ed26-3ccf-4eb7-9493-b65dffaa4ffc\" bound to claim \"csi-mock-volumes-8182/pvc-qj6fc\"\nI0523 06:31:22.272119       1 pv_controller.go:803] claim \"csi-mock-volumes-8182/pvc-qj6fc\" entered phase \"Bound\"\nI0523 06:31:22.326548       1 namespace_controller.go:185] Namespace has been deleted provisioning-8165\nE0523 06:31:22.331015       1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-1390/default: secrets \"default-token-shdvb\" is forbidden: unable to create new content in namespace statefulset-1390 because it is being terminated\nI0523 06:31:22.551945       1 event.go:291] \"Event occurred\" object=\"volume-expand-3340/awsm2f9s\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0523 06:31:22.639226       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-1034\nI0523 06:31:23.066625       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-2b2dd79b-dbc0-4b42-b08c-40ce86544e89\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-9531^4\") from node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:23.239728       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") on node \"ip-172-20-41-57.ca-central-1.compute.internal\" \nI0523 06:31:23.271627       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") from node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:23.314830       1 aws.go:1998] Assigned mount device cm -> volume vol-03ca0c16687584cfe\nI0523 06:31:23.340552       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-500\nI0523 06:31:23.361611       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-2b2dd79b-dbc0-4b42-b08c-40ce86544e89\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-9531^4\") from node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:23.361685       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9531/pvc-volume-tester-s9nmb\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-2b2dd79b-dbc0-4b42-b08c-40ce86544e89\\\" \"\nI0523 06:31:23.402888       1 event.go:291] \"Event occurred\" object=\"webhook-7744/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica 
set sample-webhook-deployment-cbccbf6bb to 1\"\nI0523 06:31:23.403169       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-7744/sample-webhook-deployment-cbccbf6bb\" need=1 creating=1\nI0523 06:31:23.479592       1 namespace_controller.go:185] Namespace has been deleted secrets-954\nI0523 06:31:23.563232       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-7744/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:23.615531       1 namespace_controller.go:185] Namespace has been deleted containers-6592\nI0523 06:31:23.617221       1 aws.go:2411] AttachVolume volume=\"vol-03ca0c16687584cfe\" instance=\"i-03e33b3471bcf6e9f\" request returned {\n  AttachTime: 2021-05-23 06:31:23.604 +0000 UTC,\n  Device: \"/dev/xvdcm\",\n  InstanceId: \"i-03e33b3471bcf6e9f\",\n  State: \"attaching\",\n  VolumeId: \"vol-03ca0c16687584cfe\"\n}\nI0523 06:31:23.653827       1 event.go:291] \"Event occurred\" object=\"webhook-7744/sample-webhook-deployment-cbccbf6bb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-cbccbf6bb-rkqvg\"\nI0523 06:31:23.696066       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-7744/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:23.769647       1 event.go:291] \"Event occurred\" object=\"statefulset-3948/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0523 06:31:23.799555       1 namespace_controller.go:185] Namespace has been deleted topology-4959\nI0523 06:31:23.821612       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-3125/pod-fce96266-31ea-4861-9479-94998b689b4b uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-mf2ww pvc- persistent-local-volumes-test-3125 /api/v1/namespaces/persistent-local-volumes-test-3125/persistentvolumeclaims/pvc-mf2ww 05aa026c-d6cd-46fc-9d73-a98872001dec 9775 0 2021-05-23 06:31:11 +0000 UTC 2021-05-23 06:31:23 +0000 UTC 0xc00330e988 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:31:11 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:31:11 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi 
BinarySI},},},VolumeName:local-pvxsp6j,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-3125,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:31:23.821703       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-3125/pvc-mf2ww because it is still being used\nI0523 06:31:24.096559       1 namespace_controller.go:185] Namespace has been deleted pv-3276\nI0523 06:31:24.219812       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"svc-latency-3162/svc-latency-rc\" need=1 creating=1\nI0523 06:31:24.393567       1 namespace_controller.go:185] Namespace has been deleted multi-az-6568\nI0523 06:31:24.440694       1 event.go:291] \"Event occurred\" object=\"svc-latency-3162/svc-latency-rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: svc-latency-rc-hx8gs\"\nE0523 06:31:24.848694       1 tokens_controller.go:261] error synchronizing serviceaccount volume-2954/default: secrets \"default-token-nt8w2\" is forbidden: unable to create new content in namespace volume-2954 because it is being terminated\nI0523 06:31:25.454495       1 event.go:291] \"Event occurred\" object=\"ephemeral-348-5093/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0523 06:31:25.492967       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5799/webserver-deployment-dd94f59b7\" need=10 creating=10\nI0523 06:31:25.501927       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-deployment-dd94f59b7 to 10\"\nI0523 06:31:25.656031       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5799/webserver-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:25.657105       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-94fgj\"\nE0523 06:31:25.685629       1 tokens_controller.go:261] error synchronizing serviceaccount pods-7017/default: secrets \"default-token-vw5xw\" is forbidden: unable to create new content in namespace pods-7017 because it is being terminated\nI0523 06:31:25.699163       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-5lhq8\"\nI0523 06:31:25.700205       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-bghqb\"\nI0523 06:31:25.712398       1 aws.go:2021] Releasing in-process attachment entry: cm -> volume 
vol-03ca0c16687584cfe\nI0523 06:31:25.712442       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") from node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:25.713763       1 event.go:291] \"Event occurred\" object=\"provisioning-8562/pod-subpath-test-dynamicpv-h8vj\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\\\" \"\nI0523 06:31:25.728395       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-5kjhh\"\nI0523 06:31:25.728467       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-chkr2\"\nI0523 06:31:25.735798       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-jbbcm\"\nI0523 06:31:25.736254       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-lkjjz\"\nI0523 06:31:25.758538       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-xzzm4\"\nI0523 06:31:25.758618       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-flqbh\"\nI0523 06:31:25.797302       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-whk6s\"\nI0523 06:31:25.797329       1 event.go:291] \"Event occurred\" object=\"ephemeral-348-5093/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nE0523 06:31:26.116727       1 tokens_controller.go:261] error synchronizing serviceaccount projected-6650/default: secrets \"default-token-nc48l\" is forbidden: unable to create new content in namespace projected-6650 because it is being terminated\nI0523 06:31:26.203211       1 event.go:291] \"Event occurred\" object=\"ephemeral-348-5093/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0523 06:31:26.323560       1 event.go:291] \"Event occurred\" object=\"ephemeral-348-5093/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0523 06:31:26.600452       1 event.go:291] \"Event occurred\" object=\"ephemeral-348-5093/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nE0523 06:31:26.875725       1 tokens_controller.go:261] error synchronizing serviceaccount kubelet-test-6334/default: secrets \"default-token-f6ksp\" is forbidden: unable to create new content in namespace kubelet-test-6334 because it is being terminated\nI0523 06:31:27.093506       1 event.go:291] \"Event occurred\" object=\"job-2400/backofflimit\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: backofflimit-8vmvq\"\nI0523 06:31:27.959557       1 aws_util.go:113] Successfully created EBS Disk volume aws://ca-central-1a/vol-0caa3dbb82efbfdda\nI0523 06:31:28.066691       1 pv_controller.go:1647] volume \"pvc-04f00ffd-2eb4-4b33-bb4c-98a256d920ef\" provisioned for claim \"volume-expand-3340/awsm2f9s\"\nI0523 06:31:28.066903       1 event.go:291] \"Event occurred\" object=\"volume-expand-3340/awsm2f9s\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ProvisioningSucceeded\" message=\"Successfully provisioned volume pvc-04f00ffd-2eb4-4b33-bb4c-98a256d920ef using kubernetes.io/aws-ebs\"\nI0523 06:31:28.096067       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-3125/pod-fce96266-31ea-4861-9479-94998b689b4b uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-mf2ww pvc- persistent-local-volumes-test-3125 /api/v1/namespaces/persistent-local-volumes-test-3125/persistentvolumeclaims/pvc-mf2ww 05aa026c-d6cd-46fc-9d73-a98872001dec 9775 0 2021-05-23 06:31:11 +0000 UTC 2021-05-23 06:31:23 +0000 UTC 0xc00330e988 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:31:11 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:31:11 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvxsp6j,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-3125,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:31:28.096171       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-3125/pvc-mf2ww because it is still being used\nI0523 06:31:28.118403       1 pv_controller.go:859] volume \"pvc-04f00ffd-2eb4-4b33-bb4c-98a256d920ef\" entered phase \"Bound\"\nI0523 06:31:28.118430       1 pv_controller.go:962] volume 
\"pvc-04f00ffd-2eb4-4b33-bb4c-98a256d920ef\" bound to claim \"volume-expand-3340/awsm2f9s\"\nI0523 06:31:28.213950       1 pv_controller.go:803] claim \"volume-expand-3340/awsm2f9s\" entered phase \"Bound\"\nI0523 06:31:28.426002       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-7796-3761/csi-mockplugin-7c8c84977d\" objectUID=7ec41f42-ae57-4d57-aa60-b4f1fc7528c2 kind=\"ControllerRevision\"\nI0523 06:31:28.426193       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-7796-3761/csi-mockplugin\nI0523 06:31:28.426245       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-7796-3761/csi-mockplugin-0\" objectUID=e880032d-7bc7-451c-b663-d7e5da003a48 kind=\"Pod\"\nI0523 06:31:28.537737       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-7796-3761/csi-mockplugin-7c8c84977d\" objectUID=7ec41f42-ae57-4d57-aa60-b4f1fc7528c2 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:28.552700       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-3125/pod-fce96266-31ea-4861-9479-94998b689b4b uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-mf2ww pvc- persistent-local-volumes-test-3125 /api/v1/namespaces/persistent-local-volumes-test-3125/persistentvolumeclaims/pvc-mf2ww 05aa026c-d6cd-46fc-9d73-a98872001dec 9775 0 2021-05-23 06:31:11 +0000 UTC 2021-05-23 06:31:23 +0000 UTC 0xc00330e988 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:31:11 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:31:11 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvxsp6j,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-3125,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:31:28.552758       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-3125/pvc-mf2ww because it is still being used\nI0523 06:31:28.559013       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-7796-3761/csi-mockplugin-0\" objectUID=e880032d-7bc7-451c-b663-d7e5da003a48 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:28.699176       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-3125/pvc-mf2ww is unused\nI0523 06:31:28.713568       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-04f00ffd-2eb4-4b33-bb4c-98a256d920ef\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0caa3dbb82efbfdda\") from node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:28.765347       1 aws.go:1998] Assigned mount device ck -> volume 
vol-0caa3dbb82efbfdda\nI0523 06:31:28.884173       1 pv_controller.go:633] volume \"local-pvxsp6j\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:31:28.903712       1 pv_controller.go:859] volume \"local-pvxsp6j\" entered phase \"Released\"\nI0523 06:31:28.932758       1 pv_controller_base.go:500] deletion of claim \"persistent-local-volumes-test-3125/pvc-mf2ww\" was already processed\nI0523 06:31:28.948570       1 namespace_controller.go:185] Namespace has been deleted services-3782\nI0523 06:31:28.948589       1 namespace_controller.go:185] Namespace has been deleted statefulset-1390\nI0523 06:31:28.974122       1 namespace_controller.go:185] Namespace has been deleted provisioning-6143-3657\nI0523 06:31:28.974142       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7796\nE0523 06:31:29.039808       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-2021/default: secrets \"default-token-bxns5\" is forbidden: unable to create new content in namespace kubectl-2021 because it is being terminated\nI0523 06:31:29.072017       1 aws.go:2411] AttachVolume volume=\"vol-0caa3dbb82efbfdda\" instance=\"i-03e33b3471bcf6e9f\" request returned {\n  AttachTime: 2021-05-23 06:31:29.062 +0000 UTC,\n  Device: \"/dev/xvdck\",\n  InstanceId: \"i-03e33b3471bcf6e9f\",\n  State: \"attaching\",\n  VolumeId: \"vol-0caa3dbb82efbfdda\"\n}\nI0523 06:31:29.119324       1 pv_controller.go:859] volume \"local-pvrkndv\" entered phase \"Available\"\nI0523 06:31:29.151285       1 pv_controller.go:910] claim \"persistent-local-volumes-test-5695/pvc-8bzhr\" bound to volume \"local-pvrkndv\"\nI0523 06:31:29.183108       1 pv_controller.go:859] volume \"local-pvrkndv\" entered phase \"Bound\"\nI0523 06:31:29.183133       1 pv_controller.go:962] volume \"local-pvrkndv\" bound to claim \"persistent-local-volumes-test-5695/pvc-8bzhr\"\nI0523 06:31:29.208076       1 pv_controller.go:803] claim \"persistent-local-volumes-test-5695/pvc-8bzhr\" entered phase \"Bound\"\nI0523 06:31:30.086521       1 pv_controller.go:910] claim \"provisioning-2653/pvc-bn96z\" bound to volume \"local-mgg9s\"\nI0523 06:31:30.121673       1 pv_controller.go:859] volume \"local-mgg9s\" entered phase \"Bound\"\nI0523 06:31:30.121701       1 pv_controller.go:962] volume \"local-mgg9s\" bound to claim \"provisioning-2653/pvc-bn96z\"\nI0523 06:31:30.156276       1 pv_controller.go:803] claim \"provisioning-2653/pvc-bn96z\" entered phase \"Bound\"\nE0523 06:31:30.544556       1 tokens_controller.go:261] error synchronizing serviceaccount persistent-local-volumes-test-3125/default: secrets \"default-token-xmjbx\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-3125 because it is being terminated\nI0523 06:31:31.184069       1 aws.go:2021] Releasing in-process attachment entry: ck -> volume vol-0caa3dbb82efbfdda\nI0523 06:31:31.184119       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-04f00ffd-2eb4-4b33-bb4c-98a256d920ef\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0caa3dbb82efbfdda\") from node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:31.184662       1 event.go:291] \"Event occurred\" object=\"volume-expand-3340/pod-764bd999-8902-45b5-9526-fcf28ebfe36e\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-04f00ffd-2eb4-4b33-bb4c-98a256d920ef\\\" \"\nI0523 06:31:31.201279       1 
pvc_protection_controller.go:291] PVC volume-7994/pvc-kwvdl is unused\nI0523 06:31:31.308690       1 pv_controller.go:633] volume \"local-kvvcf\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:31:31.323412       1 pv_controller.go:859] volume \"local-kvvcf\" entered phase \"Released\"\nI0523 06:31:31.380416       1 pv_controller_base.go:500] deletion of claim \"volume-7994/pvc-kwvdl\" was already processed\nI0523 06:31:31.484949       1 event.go:291] \"Event occurred\" object=\"job-2400/backofflimit\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: backofflimit-7dnxf\"\nE0523 06:31:31.488087       1 tokens_controller.go:261] error synchronizing serviceaccount custom-resource-definition-3164/default: secrets \"default-token-7pp8p\" is forbidden: unable to create new content in namespace custom-resource-definition-3164 because it is being terminated\nE0523 06:31:31.498851       1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key \"job-2400/backofflimit\"\nE0523 06:31:31.507773       1 job_controller.go:402] Error syncing job: failed pod(s) detected for job key \"job-2400/backofflimit\"\nE0523 06:31:31.880896       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-3889/pvc-8rbqs: storageclass.storage.k8s.io \"provisioning-3889\" not found\nI0523 06:31:31.881187       1 event.go:291] \"Event occurred\" object=\"provisioning-3889/pvc-8rbqs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-3889\\\" not found\"\nI0523 06:31:31.918323       1 pv_controller.go:859] volume \"local-pf8j2\" entered phase \"Available\"\nI0523 06:31:32.305300       1 garbagecollector.go:404] \"Processing object\" object=\"statefulset-7578/ss2-0\" objectUID=50f0bd77-ef58-4749-8e00-4f89b79b87a6 kind=\"CiliumEndpoint\"\nI0523 06:31:32.307341       1 garbagecollector.go:519] \"Deleting object\" object=\"statefulset-7578/ss2-0\" objectUID=50f0bd77-ef58-4749-8e00-4f89b79b87a6 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:32.321916       1 event.go:291] \"Event occurred\" object=\"statefulset-7578/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0523 06:31:32.359195       1 garbagecollector.go:404] \"Processing object\" object=\"statefulset-7578/ss2-1\" objectUID=e3e961ac-302d-400e-b3f9-e45ba3b323fe kind=\"CiliumEndpoint\"\nI0523 06:31:32.372806       1 garbagecollector.go:519] \"Deleting object\" object=\"statefulset-7578/ss2-1\" objectUID=e3e961ac-302d-400e-b3f9-e45ba3b323fe kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:32.451350       1 garbagecollector.go:404] \"Processing object\" object=\"statefulset-7578/ss2-2\" objectUID=106128fe-bf6f-4266-b7c2-da68d22c97eb kind=\"CiliumEndpoint\"\nI0523 06:31:32.461650       1 garbagecollector.go:519] \"Deleting object\" object=\"statefulset-7578/ss2-2\" objectUID=106128fe-bf6f-4266-b7c2-da68d22c97eb kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0523 06:31:32.668281       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-333/pvc-tmg9w: storageclass.storage.k8s.io \"provisioning-333\" not found\nI0523 06:31:32.668551       1 event.go:291] \"Event occurred\" object=\"provisioning-333/pvc-tmg9w\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" 
reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-333\\\" not found\"\nI0523 06:31:32.749766       1 pv_controller.go:859] volume \"local-66ksg\" entered phase \"Available\"\nI0523 06:31:32.905576       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8699\nE0523 06:31:33.374825       1 namespace_controller.go:162] deletion of namespace pods-5294 failed: unexpected items still remain in namespace: pods-5294 for gvr: /v1, Resource=pods\nI0523 06:31:33.645780       1 garbagecollector.go:404] \"Processing object\" object=\"webhook-7744/e2e-test-webhook-9sllv\" objectUID=b13bdc8f-f95e-4152-9b03-d2cb2b2b7546 kind=\"EndpointSlice\"\nI0523 06:31:33.688750       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-7744/e2e-test-webhook-9sllv\" objectUID=b13bdc8f-f95e-4152-9b03-d2cb2b2b7546 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:33.917044       1 namespace_controller.go:185] Namespace has been deleted crictl-4490\nI0523 06:31:33.935043       1 namespace_controller.go:185] Namespace has been deleted projected-6650\nE0523 06:31:34.072534       1 tokens_controller.go:261] error synchronizing serviceaccount csi-mock-volumes-7796-3761/default: secrets \"default-token-5m4h6\" is forbidden: unable to create new content in namespace csi-mock-volumes-7796-3761 because it is being terminated\nI0523 06:31:34.189193       1 garbagecollector.go:404] \"Processing object\" object=\"webhook-7744/sample-webhook-deployment-cbccbf6bb\" objectUID=2d648662-d493-48a9-872e-51fa0e04f095 kind=\"ReplicaSet\"\nI0523 06:31:34.189386       1 deployment_controller.go:581] Deployment webhook-7744/sample-webhook-deployment has been deleted\nI0523 06:31:34.196084       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-7744/sample-webhook-deployment-cbccbf6bb\" objectUID=2d648662-d493-48a9-872e-51fa0e04f095 kind=\"ReplicaSet\" propagationPolicy=Background\nI0523 06:31:34.217725       1 garbagecollector.go:404] \"Processing object\" object=\"webhook-7744/sample-webhook-deployment-cbccbf6bb-rkqvg\" objectUID=9cad743a-a2de-423a-9639-38209172608a kind=\"Pod\"\nI0523 06:31:34.235409       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-7744/sample-webhook-deployment-cbccbf6bb-rkqvg\" objectUID=9cad743a-a2de-423a-9639-38209172608a kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:34.254864       1 garbagecollector.go:404] \"Processing object\" object=\"webhook-7744/sample-webhook-deployment-cbccbf6bb-rkqvg\" objectUID=23ea0a16-ce30-4db4-bd0a-ead796b0286c kind=\"CiliumEndpoint\"\nI0523 06:31:34.260920       1 garbagecollector.go:519] \"Deleting object\" object=\"webhook-7744/sample-webhook-deployment-cbccbf6bb-rkqvg\" objectUID=23ea0a16-ce30-4db4-bd0a-ead796b0286c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:34.272822       1 namespace_controller.go:185] Namespace has been deleted volume-2954\nI0523 06:31:34.299076       1 stateful_set_control.go:527] StatefulSet statefulset-3948/ss2 terminating Pod ss2-2 for update\nI0523 06:31:34.313817       1 event.go:291] \"Event occurred\" object=\"statefulset-3948/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI0523 06:31:34.328714       1 namespace_controller.go:185] Namespace has been deleted provisioning-4582\nI0523 06:31:34.371740       1 namespace_controller.go:185] Namespace has been deleted 
container-lifecycle-hook-5142\nE0523 06:31:34.486308       1 namespace_controller.go:162] deletion of namespace pods-5294 failed: unexpected items still remain in namespace: pods-5294 for gvr: /v1, Resource=pods\nE0523 06:31:34.607866       1 namespace_controller.go:162] deletion of namespace pods-5294 failed: unexpected items still remain in namespace: pods-5294 for gvr: /v1, Resource=pods\nI0523 06:31:34.678210       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-432/externalname-service\" need=2 creating=2\nI0523 06:31:34.685950       1 event.go:291] \"Event occurred\" object=\"services-432/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-lwhnk\"\nI0523 06:31:34.689832       1 event.go:291] \"Event occurred\" object=\"services-432/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-2lb48\"\nI0523 06:31:34.751946       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-500-2090\nE0523 06:31:34.799601       1 namespace_controller.go:162] deletion of namespace pods-5294 failed: unexpected items still remain in namespace: pods-5294 for gvr: /v1, Resource=pods\nI0523 06:31:34.802567       1 namespace_controller.go:185] Namespace has been deleted projected-3417\nE0523 06:31:34.966614       1 namespace_controller.go:162] deletion of namespace pods-5294 failed: unexpected items still remain in namespace: pods-5294 for gvr: /v1, Resource=pods\nI0523 06:31:34.988294       1 namespace_controller.go:185] Namespace has been deleted kubectl-2021\nE0523 06:31:35.143614       1 namespace_controller.go:162] deletion of namespace pods-5294 failed: unexpected items still remain in namespace: pods-5294 for gvr: /v1, Resource=pods\nI0523 06:31:35.366983       1 namespace_controller.go:185] Namespace has been deleted provisioning-5480-5517\nI0523 06:31:35.402629       1 namespace_controller.go:185] Namespace has been deleted downward-api-6456\nE0523 06:31:35.437027       1 namespace_controller.go:162] deletion of namespace pods-5294 failed: unexpected items still remain in namespace: pods-5294 for gvr: /v1, Resource=pods\nI0523 06:31:35.613043       1 namespace_controller.go:185] Namespace has been deleted ssh-6591\nI0523 06:31:35.840929       1 namespace_controller.go:185] Namespace has been deleted pods-7017\nI0523 06:31:35.963145       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3125\nI0523 06:31:36.047345       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:36.050074       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:36.078892       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5799/webserver-deployment-795d758f88\" need=3 creating=3\nI0523 06:31:36.079106       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" 
message=\"Scaled up replica set webserver-deployment-795d758f88 to 3\"\nI0523 06:31:36.089817       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-rz4xs\"\nI0523 06:31:36.110534       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-5799/webserver-deployment-dd94f59b7\" need=8 deleting=2\nI0523 06:31:36.110563       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-5799/webserver-deployment-dd94f59b7\" relatedReplicaSets=[webserver-deployment-dd94f59b7 webserver-deployment-795d758f88]\nI0523 06:31:36.110644       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-deployment-dd94f59b7\" pod=\"deployment-5799/webserver-deployment-dd94f59b7-lkjjz\"\nI0523 06:31:36.110824       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-deployment-dd94f59b7\" pod=\"deployment-5799/webserver-deployment-dd94f59b7-whk6s\"\nI0523 06:31:36.111336       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-deployment-dd94f59b7 to 8\"\nI0523 06:31:36.112835       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-92bst\"\nI0523 06:31:36.115433       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-mptwj\"\nI0523 06:31:36.128053       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5799/webserver-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:36.150007       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-lkjjz\" objectUID=accdfdbe-c6f3-4025-b911-00c8d734bbd1 kind=\"CiliumEndpoint\"\nI0523 06:31:36.150391       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5799/webserver-deployment\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-deployment-795d758f88\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:36.151173       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-deployment-dd94f59b7-lkjjz\"\nI0523 06:31:36.154767       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-whk6s\" objectUID=2c986669-28aa-4903-93a9-ba0b32c6bdb0 kind=\"CiliumEndpoint\"\nI0523 06:31:36.154904       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-deployment-dd94f59b7-whk6s\"\nI0523 06:31:36.160141       
1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5799/webserver-deployment-795d758f88\" need=5 creating=2\nI0523 06:31:36.162032       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-lkjjz\" objectUID=accdfdbe-c6f3-4025-b911-00c8d734bbd1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:36.162453       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-deployment-795d758f88 to 5\"\nI0523 06:31:36.169680       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-wxz7q\"\nI0523 06:31:36.169735       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-whk6s\" objectUID=2c986669-28aa-4903-93a9-ba0b32c6bdb0 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:36.203850       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-dqnhp\"\nI0523 06:31:36.557012       1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-3164\nI0523 06:31:36.659049       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d09414535beab08c\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:36.661862       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d09414535beab08c\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:37.187833       1 pvc_protection_controller.go:291] PVC csi-mock-volumes-8182/pvc-qj6fc is unused\nI0523 06:31:37.192009       1 pv_controller.go:633] volume \"pvc-09e7ed26-3ccf-4eb7-9493-b65dffaa4ffc\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:31:37.195106       1 pv_controller.go:859] volume \"pvc-09e7ed26-3ccf-4eb7-9493-b65dffaa4ffc\" entered phase \"Released\"\nI0523 06:31:37.197116       1 pv_controller.go:1321] isVolumeReleased[pvc-09e7ed26-3ccf-4eb7-9493-b65dffaa4ffc]: volume is released\nI0523 06:31:37.273787       1 pv_controller_base.go:500] deletion of claim \"csi-mock-volumes-8182/pvc-qj6fc\" was already processed\nI0523 06:31:37.445440       1 event.go:291] \"Event occurred\" object=\"ephemeral-3638-523/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0523 06:31:37.479462       1 pvc_protection_controller.go:291] PVC provisioning-8562/awszg62z is unused\nI0523 06:31:37.489877       1 pv_controller.go:633] volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" is released and reclaim policy \"Delete\" will be executed\nI0523 06:31:37.493232       1 pv_controller.go:859] volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" entered phase \"Released\"\nI0523 06:31:37.499295       1 pv_controller.go:1321] isVolumeReleased[pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230]: 
volume is released\nI0523 06:31:37.558606       1 event.go:291] \"Event occurred\" object=\"ephemeral-3638-523/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0523 06:31:37.628236       1 aws_util.go:62] Error deleting EBS Disk volume aws://ca-central-1a/vol-03ca0c16687584cfe: error deleting EBS volume \"vol-03ca0c16687584cfe\" since volume is currently attached to \"i-03e33b3471bcf6e9f\"\nE0523 06:31:37.628296       1 goroutinemap.go:150] Operation for \"delete-pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230[74522242-bccb-4472-b510-aa42f790332f]\" failed. No retries permitted until 2021-05-23 06:31:38.128274713 +0000 UTC m=+478.240668412 (durationBeforeRetry 500ms). Error: \"error deleting EBS volume \\\"vol-03ca0c16687584cfe\\\" since volume is currently attached to \\\"i-03e33b3471bcf6e9f\\\"\"\nI0523 06:31:37.628326       1 event.go:291] \"Event occurred\" object=\"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-03ca0c16687584cfe\\\" since volume is currently attached to \\\"i-03e33b3471bcf6e9f\\\"\"\nI0523 06:31:37.684084       1 event.go:291] \"Event occurred\" object=\"ephemeral-3638-523/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0523 06:31:37.766406       1 event.go:291] \"Event occurred\" object=\"ephemeral-3638-523/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0523 06:31:37.847320       1 event.go:291] \"Event occurred\" object=\"ephemeral-3638-523/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0523 06:31:38.448139       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-278-7023/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0523 06:31:38.517929       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5799/webserver-deployment-dd94f59b7\" need=20 creating=12\nI0523 06:31:38.518472       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-deployment-dd94f59b7 to 20\"\nI0523 06:31:38.527593       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-deployment-795d758f88 to 13\"\nI0523 06:31:38.527773       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-t57kw\"\nI0523 06:31:38.528037       1 replica_set.go:559] \"Too few replicas\" 
replicaSet=\"deployment-5799/webserver-deployment-795d758f88\" need=13 creating=8\nI0523 06:31:38.542667       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-5wjrx\"\nI0523 06:31:38.542795       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-278-7023/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0523 06:31:38.542997       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-wdwhj\"\nI0523 06:31:38.550891       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-t862v\"\nI0523 06:31:38.563165       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-wwqck\"\nI0523 06:31:38.563393       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-fvrxc\"\nI0523 06:31:38.567445       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-8mqpg\"\nI0523 06:31:38.569024       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-2jg5q\"\nI0523 06:31:38.583357       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-5bmbd\"\nI0523 06:31:38.583596       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-ps9bp\"\nI0523 06:31:38.583946       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-j7xp9\"\nI0523 06:31:38.584005       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-9whqt\"\nI0523 06:31:38.584213       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-wfsdx\"\nI0523 06:31:38.586661       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-mlpg7\"\nI0523 06:31:38.653936       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-jvskw\"\nI0523 06:31:38.654180       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-dns8k\"\nI0523 06:31:38.654688       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-zzrtd\"\nI0523 06:31:38.654773       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-jt285\"\nI0523 06:31:38.688847       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-5b7l6\"\nI0523 06:31:38.689856       1 event.go:291] \"Event occurred\" object=\"deployment-5799/webserver-deployment-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-dd94f59b7-pg5xs\"\nI0523 06:31:38.866390       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5799/webserver-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:38.928000       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5799/webserver-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0523 06:31:39.093994       1 tokens_controller.go:261] error synchronizing serviceaccount webhook-7744/default: secrets \"default-token-l2vzv\" is forbidden: unable to create new content in namespace webhook-7744 because it is being terminated\nE0523 06:31:39.121531       1 tokens_controller.go:261] error synchronizing serviceaccount webhook-7744-markers/default: secrets \"default-token-hgw4n\" is forbidden: unable to create new content in namespace webhook-7744-markers because it is being terminated\nI0523 06:31:39.478140       1 namespace_controller.go:185] Namespace has been deleted kubectl-6488\nI0523 06:31:39.624664       1 event.go:291] \"Event occurred\" object=\"statefulset-3948/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0523 06:31:40.634020       1 
garbagecollector.go:404] \"Processing object\" object=\"services-9797/pod1\" objectUID=85c1e7ac-e2fc-4b7f-b58e-001b1bf56ef8 kind=\"CiliumEndpoint\"\nI0523 06:31:40.636656       1 garbagecollector.go:519] \"Deleting object\" object=\"services-9797/pod1\" objectUID=85c1e7ac-e2fc-4b7f-b58e-001b1bf56ef8 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:40.881143       1 namespace_controller.go:185] Namespace has been deleted pods-5294\nI0523 06:31:41.002402       1 namespace_controller.go:185] Namespace has been deleted init-container-4639\nE0523 06:31:41.097206       1 tokens_controller.go:261] error synchronizing serviceaccount crd-publish-openapi-5272/default: secrets \"default-token-w68xr\" is forbidden: unable to create new content in namespace crd-publish-openapi-5272 because it is being terminated\nI0523 06:31:41.311649       1 pvc_protection_controller.go:291] PVC volume-9388/pvc-7t8bw is unused\nI0523 06:31:41.326198       1 pv_controller.go:633] volume \"local-qz7nv\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:31:41.331218       1 pv_controller.go:859] volume \"local-qz7nv\" entered phase \"Released\"\nI0523 06:31:41.348134       1 pv_controller_base.go:500] deletion of claim \"volume-9388/pvc-7t8bw\" was already processed\nI0523 06:31:41.475379       1 aws.go:2275] Waiting for volume \"vol-03ca0c16687584cfe\" state: actual=detaching, desired=detached\nI0523 06:31:41.499325       1 event.go:291] \"Event occurred\" object=\"job-2400/backofflimit\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Warning\" reason=\"BackoffLimitExceeded\" message=\"Job has reached the specified backoff limit\"\nI0523 06:31:41.851424       1 garbagecollector.go:404] \"Processing object\" object=\"services-9797/pod2\" objectUID=2f5581aa-b42e-4e4c-ab77-27ec3edcf307 kind=\"CiliumEndpoint\"\nI0523 06:31:41.854271       1 garbagecollector.go:519] \"Deleting object\" object=\"services-9797/pod2\" objectUID=2f5581aa-b42e-4e4c-ab77-27ec3edcf307 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:41.982060       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"aws-volume-0\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-0d09414535beab08c\") on node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nE0523 06:31:42.424265       1 tokens_controller.go:261] error synchronizing serviceaccount services-6629/default: secrets \"default-token-vnq97\" is forbidden: unable to create new content in namespace services-6629 because it is being terminated\nI0523 06:31:42.431079       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-6248/slow-terminating-unready-pod\" need=1 creating=1\nI0523 06:31:42.439098       1 event.go:291] \"Event occurred\" object=\"services-6248/slow-terminating-unready-pod\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: slow-terminating-unready-pod-67g6s\"\nI0523 06:31:42.784120       1 namespace_controller.go:185] Namespace has been deleted volume-7994\nI0523 06:31:43.031706       1 garbagecollector.go:404] \"Processing object\" object=\"services-9797/multi-endpoint-test-7wb8b\" objectUID=13d49ec0-a6fd-421a-972d-034867ece53e kind=\"EndpointSlice\"\nI0523 06:31:43.034883       1 garbagecollector.go:519] \"Deleting object\" object=\"services-9797/multi-endpoint-test-7wb8b\" objectUID=13d49ec0-a6fd-421a-972d-034867ece53e kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:43.248039       1 expand_controller.go:270] 
Ignoring the PVC \"csi-mock-volumes-9531/pvc-9jc2f\" (uid: \"2b2dd79b-dbc0-4b42-b08c-40ce86544e89\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI0523 06:31:43.248296       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9531/pvc-9jc2f\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nI0523 06:31:43.530771       1 aws.go:2501] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-05-23 06:31:23 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcm\",\n  InstanceId: \"i-03e33b3471bcf6e9f\",\n  State: \"detaching\",\n  VolumeId: \"vol-03ca0c16687584cfe\"\n}\nI0523 06:31:43.530826       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" (UniqueName: \"kubernetes.io/aws-ebs/aws://ca-central-1a/vol-03ca0c16687584cfe\") on node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:43.662732       1 pvc_protection_controller.go:303] Pod persistent-local-volumes-test-5695/pod-675cd175-85a1-40d3-b771-09040376c4b1 uses PVC &PersistentVolumeClaim{ObjectMeta:{pvc-8bzhr pvc- persistent-local-volumes-test-5695 /api/v1/namespaces/persistent-local-volumes-test-5695/persistentvolumeclaims/pvc-8bzhr 78621564-2097-46cb-af5f-586bc41864be 12027 0 2021-05-23 06:31:29 +0000 UTC 2021-05-23 06:31:43 +0000 UTC 0xc002998008 map[] map[pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-05-23 06:31:29 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:generateName\":{}},\"f:spec\":{\"f:accessModes\":{},\"f:resources\":{\"f:requests\":{\".\":{},\"f:storage\":{}}},\"f:storageClassName\":{},\"f:volumeMode\":{}}}} {kube-controller-manager Update v1 2021-05-23 06:31:29 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:pv.kubernetes.io/bind-completed\":{},\"f:pv.kubernetes.io/bound-by-controller\":{}}},\"f:spec\":{\"f:volumeName\":{}},\"f:status\":{\"f:accessModes\":{},\"f:capacity\":{\".\":{},\"f:storage\":{}},\"f:phase\":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},},VolumeName:local-pvrkndv,Selector:nil,StorageClassName:*local-volume-test-storageclass-persistent-local-volumes-test-5695,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{2147483648 0} {<nil>} 2Gi BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}\nI0523 06:31:43.662805       1 pvc_protection_controller.go:181] Keeping PVC persistent-local-volumes-test-5695/pvc-8bzhr because it is still being used\nI0523 06:31:44.213002       1 namespace_controller.go:185] Namespace has been deleted webhook-7744-markers\nI0523 06:31:44.217392       1 namespace_controller.go:185] Namespace has been deleted webhook-7744\nI0523 06:31:44.500510       1 garbagecollector.go:404] \"Processing object\" object=\"statefulset-3948/ss2-0\" objectUID=1b2ce961-7860-4893-8839-eb7e15fe056f kind=\"CiliumEndpoint\"\nW0523 06:31:44.506366       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"statefulset-3948/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nI0523 06:31:44.519114       1 event.go:291] \"Event occurred\" object=\"statefulset-3948/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0523 06:31:44.539647       1 garbagecollector.go:404] \"Processing object\" object=\"statefulset-3948/ss2-2\" objectUID=9b7dddce-41ee-442e-9c26-6a4f592bbfd2 kind=\"CiliumEndpoint\"\nI0523 06:31:44.659008       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7796-3761\nI0523 06:31:44.781629       1 event.go:291] \"Event occurred\" object=\"volumemode-2019-777/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0523 06:31:44.920749       1 event.go:291] \"Event occurred\" object=\"volumemode-2019-777/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0523 06:31:44.935779       1 garbagecollector.go:519] \"Deleting object\" object=\"statefulset-3948/ss2-2\" objectUID=9b7dddce-41ee-442e-9c26-6a4f592bbfd2 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:44.990257       1 event.go:291] \"Event occurred\" object=\"volumemode-2019-777/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0523 06:31:45.069057       1 event.go:291] \"Event occurred\" object=\"volumemode-2019-777/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0523 06:31:45.089998       1 pv_controller.go:910] claim \"provisioning-3889/pvc-8rbqs\" bound to volume \"local-pf8j2\"\nI0523 06:31:45.096901       1 pv_controller.go:1321] isVolumeReleased[pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230]: volume is released\nI0523 06:31:45.103552       1 pv_controller.go:859] volume \"local-pf8j2\" entered phase \"Bound\"\nI0523 06:31:45.103577       1 pv_controller.go:962] volume \"local-pf8j2\" bound to claim \"provisioning-3889/pvc-8rbqs\"\nI0523 06:31:45.118800       1 pv_controller.go:803] claim \"provisioning-3889/pvc-8rbqs\" entered phase \"Bound\"\nI0523 06:31:45.118893       1 pv_controller.go:910] claim \"provisioning-333/pvc-tmg9w\" bound to volume \"local-66ksg\"\nI0523 06:31:45.125451       1 pv_controller.go:859] volume \"local-66ksg\" entered phase \"Bound\"\nI0523 06:31:45.125475       1 pv_controller.go:962] volume \"local-66ksg\" bound to claim \"provisioning-333/pvc-tmg9w\"\nI0523 06:31:45.131286       1 pv_controller.go:803] claim \"provisioning-333/pvc-tmg9w\" entered phase \"Bound\"\nI0523 06:31:45.153262       1 event.go:291] \"Event occurred\" object=\"volumemode-2019-777/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0523 06:31:45.256181       1 event.go:291] \"Event occurred\" object=\"volumemode-2019/csi-hostpath252pz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volumemode-2019\\\" or manually created by system administrator\"\nI0523 06:31:45.270535       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://ca-central-1a/vol-03ca0c16687584cfe\nI0523 06:31:45.270556       1 pv_controller.go:1416] volume \"pvc-87922de4-dcd2-4e6f-8db6-f3fee73e1230\" deleted\nI0523 06:31:45.277830       1 pv_controller_base.go:500] deletion of claim \"provisioning-8562/awszg62z\" was already processed\nI0523 06:31:45.894119       1 graph_builder.go:510] add [v1/Pod, namespace: ephemeral-348, name: inline-volume-tester-smhqt, uid: 28c99368-f751-4a67-91d0-f46adf1c45e2] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0523 06:31:45.894191       1 garbagecollector.go:404] \"Processing object\" object=\"ephemeral-348/inline-volume-tester-smhqt\" objectUID=f3334f12-098c-4da7-a361-b1742358ba6d kind=\"CiliumEndpoint\"\nI0523 06:31:45.894613       1 garbagecollector.go:404] \"Processing object\" object=\"ephemeral-348/inline-volume-tester-smhqt\" objectUID=28c99368-f751-4a67-91d0-f46adf1c45e2 kind=\"Pod\"\nI0523 06:31:45.896840       1 garbagecollector.go:534] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-348, name: inline-volume-tester-smhqt, uid: f3334f12-098c-4da7-a361-b1742358ba6d] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-348, name: inline-volume-tester-smhqt, uid: 28c99368-f751-4a67-91d0-f46adf1c45e2] is deletingDependents\nI0523 06:31:45.898657       1 garbagecollector.go:519] \"Deleting object\" object=\"ephemeral-348/inline-volume-tester-smhqt\" objectUID=f3334f12-098c-4da7-a361-b1742358ba6d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:45.902576       1 garbagecollector.go:404] \"Processing object\" object=\"ephemeral-348/inline-volume-tester-smhqt\" objectUID=28c99368-f751-4a67-91d0-f46adf1c45e2 kind=\"Pod\"\nI0523 06:31:45.902788       1 garbagecollector.go:404] \"Processing object\" object=\"ephemeral-348/inline-volume-tester-smhqt\" objectUID=f3334f12-098c-4da7-a361-b1742358ba6d kind=\"CiliumEndpoint\"\nI0523 06:31:45.904150       1 garbagecollector.go:529] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-348, name: inline-volume-tester-smhqt, uid: 28c99368-f751-4a67-91d0-f46adf1c45e2]\nI0523 06:31:46.008077       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5799/webserver-deployment-795d758f88\" need=13 creating=1\nI0523 06:31:46.060315       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-795d758f88-92bst\" objectUID=91570c45-60cc-479d-9864-c8cd3f221792 kind=\"CiliumEndpoint\"\nI0523 06:31:46.114390       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-795d758f88-92bst\" objectUID=91570c45-60cc-479d-9864-c8cd3f221792 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.170950       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-795d758f88-dqnhp\" objectUID=42730ef8-dc2b-40dc-8978-0cddc94ed180 kind=\"CiliumEndpoint\"\nE0523 06:31:46.177717       1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"webserver-deployment-795d758f88-92bst\", UID:\"91570c45-60cc-479d-9864-c8cd3f221792\", 
Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-5799\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-deployment-795d758f88-92bst\", UID:\"3fa245ad-b568-4a2d-85be-e2f76d82b665\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0xc0006adc4e)}}}: ciliumendpoints.cilium.io \"webserver-deployment-795d758f88-92bst\" not found\nI0523 06:31:46.182886       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-795d758f88-92bst\" objectUID=91570c45-60cc-479d-9864-c8cd3f221792 kind=\"CiliumEndpoint\"\nI0523 06:31:46.190641       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-795d758f88-dqnhp\" objectUID=42730ef8-dc2b-40dc-8978-0cddc94ed180 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.197028       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-5272\nI0523 06:31:46.285762       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5799/webserver-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:46.345748       1 event.go:291] \"Event occurred\" object=\"volume-3477/nfs65nqs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-volume-3477\\\" or manually created by system administrator\"\nI0523 06:31:46.354735       1 event.go:291] \"Event occurred\" object=\"volume-3477/nfs65nqs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-volume-3477\\\" or manually created by system administrator\"\nI0523 06:31:46.437590       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-795d758f88-rz4xs\" objectUID=140aef75-ac50-4dc2-9e3d-6ec1f32b8dd4 kind=\"CiliumEndpoint\"\nI0523 06:31:46.443067       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-795d758f88-rz4xs\" objectUID=140aef75-ac50-4dc2-9e3d-6ec1f32b8dd4 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.457583       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-795d758f88-wdwhj\" objectUID=16ffaa02-2830-42e1-9d4f-e0201c0b6f63 kind=\"CiliumEndpoint\"\nI0523 06:31:46.466409       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-795d758f88-wdwhj\" objectUID=16ffaa02-2830-42e1-9d4f-e0201c0b6f63 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.469245       1 
garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-8182-8749/csi-mockplugin-548b796756\" objectUID=e5ca34f1-d5ec-419e-b87d-44da62d32351 kind=\"ControllerRevision\"\nI0523 06:31:46.469406       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-8182-8749/csi-mockplugin\nI0523 06:31:46.469453       1 garbagecollector.go:404] \"Processing object\" object=\"csi-mock-volumes-8182-8749/csi-mockplugin-0\" objectUID=4008476d-7812-4e7d-b4bb-ecc320c57605 kind=\"Pod\"\nI0523 06:31:46.472300       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-8182-8749/csi-mockplugin-0\" objectUID=4008476d-7812-4e7d-b4bb-ecc320c57605 kind=\"Pod\" propagationPolicy=Background\nI0523 06:31:46.472510       1 garbagecollector.go:519] \"Deleting object\" object=\"csi-mock-volumes-8182-8749/csi-mockplugin-548b796756\" objectUID=e5ca34f1-d5ec-419e-b87d-44da62d32351 kind=\"ControllerRevision\" propagationPolicy=Background\nI0523 06:31:46.496790       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-795d758f88-wxz7q\" objectUID=665c8e4b-d6e6-4d9d-9b2d-7908df3f644e kind=\"CiliumEndpoint\"\nI0523 06:31:46.507452       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-795d758f88-wxz7q\" objectUID=665c8e4b-d6e6-4d9d-9b2d-7908df3f644e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.511615       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-5799/webserver-deployment-dd94f59b7\" need=20 creating=1\nI0523 06:31:46.514873       1 event.go:291] \"Event occurred\" object=\"statefulset-7578/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI0523 06:31:46.556000       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-5kjhh\" objectUID=1d6e2a6c-2ce2-4b2c-bbb9-55749eb5608c kind=\"CiliumEndpoint\"\nI0523 06:31:46.560969       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-5kjhh\" objectUID=1d6e2a6c-2ce2-4b2c-bbb9-55749eb5608c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.568997       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-5lhq8\" objectUID=8c8163d1-5e3e-4ce4-b258-60678bb470af kind=\"CiliumEndpoint\"\nI0523 06:31:46.576465       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-5lhq8\" objectUID=8c8163d1-5e3e-4ce4-b258-60678bb470af kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.584164       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-5wjrx\" objectUID=275e3607-bc7d-48a2-a3ab-5a1dfc7e0bba kind=\"CiliumEndpoint\"\nI0523 06:31:46.588911       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-5wjrx\" objectUID=275e3607-bc7d-48a2-a3ab-5a1dfc7e0bba kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.635700       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-94fgj\" objectUID=9eef0aed-6b3d-4dd3-8758-9421f41d3f0e kind=\"CiliumEndpoint\"\nI0523 06:31:46.638650       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-94fgj\" 
objectUID=9eef0aed-6b3d-4dd3-8758-9421f41d3f0e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.648804       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-bghqb\" objectUID=d209dc00-fd0b-465d-aab0-502ae7818e2e kind=\"CiliumEndpoint\"\nI0523 06:31:46.653692       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-bghqb\" objectUID=d209dc00-fd0b-465d-aab0-502ae7818e2e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.666443       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-chkr2\" objectUID=aff0360a-58a2-492a-9af3-357b660d24ed kind=\"CiliumEndpoint\"\nI0523 06:31:46.673230       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-chkr2\" objectUID=aff0360a-58a2-492a-9af3-357b660d24ed kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.698377       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-flqbh\" objectUID=1b7c2679-3fb3-41be-9d87-3dc4b3c8e951 kind=\"CiliumEndpoint\"\nI0523 06:31:46.700735       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-flqbh\" objectUID=1b7c2679-3fb3-41be-9d87-3dc4b3c8e951 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.706006       1 namespace_controller.go:185] Namespace has been deleted configmap-4937\nI0523 06:31:46.707150       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-jbbcm\" objectUID=7f34688a-b1bf-4005-81dd-83c1e5699fd3 kind=\"CiliumEndpoint\"\nI0523 06:31:46.711573       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-jbbcm\" objectUID=7f34688a-b1bf-4005-81dd-83c1e5699fd3 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.774669       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-t57kw\" objectUID=ab0df22b-7364-414c-b4f1-269ada5fa2cc kind=\"CiliumEndpoint\"\nI0523 06:31:46.779978       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-t57kw\" objectUID=ab0df22b-7364-414c-b4f1-269ada5fa2cc kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.786769       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-t862v\" objectUID=89ec7b73-4688-47fd-8e00-d2e0b8c0cde6 kind=\"CiliumEndpoint\"\nI0523 06:31:46.798058       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-t862v\" objectUID=89ec7b73-4688-47fd-8e00-d2e0b8c0cde6 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.835852       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-xzzm4\" objectUID=bafde97b-cdfb-49f9-98b5-14b668371b1e kind=\"CiliumEndpoint\"\nI0523 06:31:46.844614       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-5799/webserver-deployment-dd94f59b7-xzzm4\" objectUID=bafde97b-cdfb-49f9-98b5-14b668371b1e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:46.997618       1 namespace_controller.go:185] Namespace has been deleted services-3324\nI0523 06:31:47.185859       1 event.go:291] \"Event occurred\" 
object=\"csi-mock-volumes-3444-2709/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nE0523 06:31:47.241889       1 tokens_controller.go:261] error synchronizing serviceaccount svc-latency-3162/default: secrets \"default-token-jw57p\" is forbidden: unable to create new content in namespace svc-latency-3162 because it is being terminated\nI0523 06:31:47.246818       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-3444-2709/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0523 06:31:47.400677       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-9646\nE0523 06:31:47.418380       1 tokens_controller.go:261] error synchronizing serviceaccount deployment-5799/default: secrets \"default-token-5g2rp\" is forbidden: unable to create new content in namespace deployment-5799 because it is being terminated\nI0523 06:31:47.499672       1 deployment_controller.go:581] Deployment deployment-5799/webserver-deployment has been deleted\nI0523 06:31:47.519662       1 namespace_controller.go:185] Namespace has been deleted services-6629\nI0523 06:31:47.904553       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-2c88s-l6rfj\" objectUID=5b7ddc2a-6982-47d6-9c50-bc10d413c27b kind=\"EndpointSlice\"\nI0523 06:31:47.907097       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-2c88s-l6rfj\" objectUID=5b7ddc2a-6982-47d6-9c50-bc10d413c27b kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:47.913588       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-2jvz2-4q8kl\" objectUID=2ce5c1df-4368-44f3-919a-53dd92b8616e kind=\"EndpointSlice\"\nI0523 06:31:47.916662       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-2jvz2-4q8kl\" objectUID=2ce5c1df-4368-44f3-919a-53dd92b8616e kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:47.925597       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-2lzgb-d8dv9\" objectUID=8bc12b3d-38e1-4388-9a3f-c32ca5a94b7e kind=\"EndpointSlice\"\nI0523 06:31:47.927101       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-2lzgb-d8dv9\" objectUID=8bc12b3d-38e1-4388-9a3f-c32ca5a94b7e kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:47.935427       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-2nd5s-7rhnh\" objectUID=e6f07540-7632-4ac2-a8b1-87bc271d99b8 kind=\"EndpointSlice\"\nI0523 06:31:47.937030       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-2nd5s-7rhnh\" objectUID=e6f07540-7632-4ac2-a8b1-87bc271d99b8 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:47.949836       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-2qfg2-9b7js\" objectUID=c5c89b5c-0985-43c4-800a-74edc57dd543 kind=\"EndpointSlice\"\nI0523 06:31:47.957572       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-2qfg2-9b7js\" objectUID=c5c89b5c-0985-43c4-800a-74edc57dd543 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:47.983393       1 
garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-489tl-qwr7t\" objectUID=442e1769-dcca-49af-a320-fb9b601cb02d kind=\"EndpointSlice\"\nI0523 06:31:47.989056       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-489tl-qwr7t\" objectUID=442e1769-dcca-49af-a320-fb9b601cb02d kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:47.996896       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-48cn2-lsq5c\" objectUID=0ebfbc12-35fa-4f65-ba6d-daf81fbe03ff kind=\"EndpointSlice\"\nI0523 06:31:47.999253       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-48cn2-lsq5c\" objectUID=0ebfbc12-35fa-4f65-ba6d-daf81fbe03ff kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.013460       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-4rbwz-zrwsc\" objectUID=d620370c-826a-405c-a283-3cd4c21d927b kind=\"EndpointSlice\"\nI0523 06:31:48.015469       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-4rbwz-zrwsc\" objectUID=d620370c-826a-405c-a283-3cd4c21d927b kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.022955       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-4rmxv-slpbs\" objectUID=c8d596c7-7f8c-4eef-a1fb-a964b08c2fe1 kind=\"EndpointSlice\"\nI0523 06:31:48.024372       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-4rmxv-slpbs\" objectUID=c8d596c7-7f8c-4eef-a1fb-a964b08c2fe1 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.032372       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-5cdlx-6vshn\" objectUID=10d5f133-b12f-4558-abfc-3fedc9af7f3f kind=\"EndpointSlice\"\nI0523 06:31:48.043467       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-5cdlx-6vshn\" objectUID=10d5f133-b12f-4558-abfc-3fedc9af7f3f kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.058645       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-5mdmf-rsqpw\" objectUID=4adda269-4691-4eeb-93e9-0da10c49df0a kind=\"EndpointSlice\"\nI0523 06:31:48.060462       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-5mdmf-rsqpw\" objectUID=4adda269-4691-4eeb-93e9-0da10c49df0a kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.069573       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-5nnxl-wrm49\" objectUID=c9804c1e-3ddc-488f-83c9-c2b3ba48acc0 kind=\"EndpointSlice\"\nI0523 06:31:48.072047       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-5nnxl-wrm49\" objectUID=c9804c1e-3ddc-488f-83c9-c2b3ba48acc0 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.082538       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-5qn4t-b29bh\" objectUID=b0d8bdd8-20ce-4651-988e-312d1057d58f kind=\"EndpointSlice\"\nI0523 06:31:48.083763       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-5qn4t-b29bh\" objectUID=b0d8bdd8-20ce-4651-988e-312d1057d58f kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.091570       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-5vtjt-88qmx\" 
objectUID=09ac601f-41eb-4e3e-851d-fca2744e2a58 kind=\"EndpointSlice\"\nI0523 06:31:48.093518       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-5vtjt-88qmx\" objectUID=09ac601f-41eb-4e3e-851d-fca2744e2a58 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.105431       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-697p2-znhjc\" objectUID=8a6e3d08-6c72-4a9a-98af-bb98b9ea0d1b kind=\"EndpointSlice\"\nI0523 06:31:48.109316       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-697p2-znhjc\" objectUID=8a6e3d08-6c72-4a9a-98af-bb98b9ea0d1b kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.124770       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-6j5nx-j6rqk\" objectUID=13a65ac9-45d0-4d2f-acd3-a285311bcaa7 kind=\"EndpointSlice\"\nI0523 06:31:48.143505       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-6jkp4-bk42j\" objectUID=c948716d-9f57-4d12-a132-5d94aad9295e kind=\"EndpointSlice\"\nI0523 06:31:48.155250       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-6x4hd-6khrr\" objectUID=37aed614-1a14-4697-8923-fc3ecb29827d kind=\"EndpointSlice\"\nI0523 06:31:48.169240       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-6zvpt-zz7hz\" objectUID=b4e14a98-b7a3-4931-8da9-ffaba513c618 kind=\"EndpointSlice\"\nI0523 06:31:48.181637       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-72c57-md5dw\" objectUID=f8267b2c-3d9e-4a6a-80d9-ffd258a94f39 kind=\"EndpointSlice\"\nI0523 06:31:48.190594       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-762h2-dnn45\" objectUID=f4b25fff-b569-4aed-b6c0-18fdd3c8dcc4 kind=\"EndpointSlice\"\nI0523 06:31:48.198703       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-6j5nx-j6rqk\" objectUID=13a65ac9-45d0-4d2f-acd3-a285311bcaa7 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.206536       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-76vm7-bwwdf\" objectUID=20d2f785-0ee5-48d9-a1da-b978c6f8bef9 kind=\"EndpointSlice\"\nI0523 06:31:48.223319       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-794mv-dshs6\" objectUID=e7364ecc-1e7e-445c-a975-4d6103561102 kind=\"EndpointSlice\"\nE0523 06:31:48.227150       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:31:48.233635       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7cwbl-h2fvk\" objectUID=59b48da4-59fb-4170-aee8-68399ac36498 kind=\"EndpointSlice\"\nI0523 06:31:48.245635       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7fftc-2gqmb\" objectUID=14328f69-594b-4236-842c-fe6f09f5625d kind=\"EndpointSlice\"\nI0523 06:31:48.255658       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-6jkp4-bk42j\" objectUID=c948716d-9f57-4d12-a132-5d94aad9295e kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.264038       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7jbvk-znt56\" 
objectUID=7ce0c824-6a54-41ee-af70-38f54a5bfef0 kind=\"EndpointSlice\"\nI0523 06:31:48.277306       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7kpvt-ts2sp\" objectUID=44dd474f-237c-4090-a582-ab4df805a953 kind=\"EndpointSlice\"\nI0523 06:31:48.296575       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-6x4hd-6khrr\" objectUID=37aed614-1a14-4697-8923-fc3ecb29827d kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.299235       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7kqsk-6s5t6\" objectUID=4d7d5a0d-2200-4001-97cf-dc8ccf7487c4 kind=\"EndpointSlice\"\nI0523 06:31:48.313404       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7n8lc-2892s\" objectUID=7112be59-e1e1-4661-b9e4-16b0996310ea kind=\"EndpointSlice\"\nE0523 06:31:48.315054       1 tokens_controller.go:261] error synchronizing serviceaccount services-9797/default: secrets \"default-token-z786h\" is forbidden: unable to create new content in namespace services-9797 because it is being terminated\nI0523 06:31:48.328178       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7n9tm-6w79q\" objectUID=5b423449-1cef-4637-8bc4-5b1a2c04f4e9 kind=\"EndpointSlice\"\nI0523 06:31:48.358708       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-6zvpt-zz7hz\" objectUID=b4e14a98-b7a3-4931-8da9-ffaba513c618 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.365446       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7qp2m-ktfmz\" objectUID=3825aeda-dec5-4f40-8407-da4ed263fcfc kind=\"EndpointSlice\"\nI0523 06:31:48.377837       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-7whmx-xh78p\" objectUID=7af7d8fe-171b-4da6-82b0-a3bf56690840 kind=\"EndpointSlice\"\nI0523 06:31:48.393081       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-855bm-sdzdx\" objectUID=40cfef6b-5b11-4394-be5c-1397758dcf64 kind=\"EndpointSlice\"\nI0523 06:31:48.409870       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-72c57-md5dw\" objectUID=f8267b2c-3d9e-4a6a-80d9-ffd258a94f39 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.446263       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-86q44-vt2p5\" objectUID=641e2f74-49ff-4cbb-906d-26a3f7f6bdf8 kind=\"EndpointSlice\"\nI0523 06:31:48.456047       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-762h2-dnn45\" objectUID=f4b25fff-b569-4aed-b6c0-18fdd3c8dcc4 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.466628       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-87hmv-xzs5g\" objectUID=38be7489-927e-45dd-848c-25cd893e3c7a kind=\"EndpointSlice\"\nI0523 06:31:48.507793       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8bstd-47vdl\" objectUID=5394fcb5-d816-4e64-bfbd-d7960777ba39 kind=\"EndpointSlice\"\nI0523 06:31:48.550923       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-76vm7-bwwdf\" objectUID=20d2f785-0ee5-48d9-a1da-b978c6f8bef9 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.599030       1 garbagecollector.go:519] \"Deleting object\" 
object=\"svc-latency-3162/latency-svc-794mv-dshs6\" objectUID=e7364ecc-1e7e-445c-a975-4d6103561102 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.649544       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7cwbl-h2fvk\" objectUID=59b48da4-59fb-4170-aee8-68399ac36498 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.709115       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7fftc-2gqmb\" objectUID=14328f69-594b-4236-842c-fe6f09f5625d kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.750613       1 resource_quota_controller.go:306] Resource quota has been deleted kubectl-5140/scopes\nE0523 06:31:48.750710       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-5140/default: secrets \"default-token-8gl22\" is forbidden: unable to create new content in namespace kubectl-5140 because it is being terminated\nI0523 06:31:48.752245       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8db9r-kp8mr\" objectUID=605dc2ab-54d5-4060-a61d-076e8bbab6ce kind=\"EndpointSlice\"\nI0523 06:31:48.799548       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7jbvk-znt56\" objectUID=7ce0c824-6a54-41ee-af70-38f54a5bfef0 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.847574       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7kpvt-ts2sp\" objectUID=44dd474f-237c-4090-a582-ab4df805a953 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.902017       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8dlxl-mvnc6\" objectUID=238cbf06-92e8-4e1d-bf24-068648ee98e9 kind=\"EndpointSlice\"\nI0523 06:31:48.948652       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7kqsk-6s5t6\" objectUID=4d7d5a0d-2200-4001-97cf-dc8ccf7487c4 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:48.968004       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=6 creating=6\nI0523 06:31:48.968622       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 6\"\nI0523 06:31:48.981207       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-lrvzd\"\nI0523 06:31:48.991046       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:48.991405       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-7btns\"\nI0523 06:31:48.991437       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-9gnlq\"\nI0523 06:31:49.000550       1 event.go:291] \"Event occurred\" 
object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-5vmjk\"\nI0523 06:31:49.007311       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-7v8gv\"\nI0523 06:31:49.012027       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7n8lc-2892s\" objectUID=7112be59-e1e1-4661-b9e4-16b0996310ea kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.016556       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-vnqt2\"\nI0523 06:31:49.062146       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=6 creating=1\nI0523 06:31:49.062394       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7n9tm-6w79q\" objectUID=5b423449-1cef-4637-8bc4-5b1a2c04f4e9 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.069021       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-qjnqw\"\nI0523 06:31:49.102791       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8f2zb-xm6s6\" objectUID=ecb70c41-7edc-4162-bbba-5300b6d655d7 kind=\"EndpointSlice\"\nI0523 06:31:49.104735       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=6 creating=1\nI0523 06:31:49.110515       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-46tk8\"\nI0523 06:31:49.146308       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7qp2m-ktfmz\" objectUID=3825aeda-dec5-4f40-8407-da4ed263fcfc kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.153731       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=6 creating=1\nI0523 06:31:49.161250       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-2z546\"\nI0523 06:31:49.197779       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-7whmx-xh78p\" objectUID=7af7d8fe-171b-4da6-82b0-a3bf56690840 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.246279       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-855bm-sdzdx\" objectUID=40cfef6b-5b11-4394-be5c-1397758dcf64 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.299526       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8mbfh-h5p6z\" objectUID=e753297f-c980-43bf-983f-46d26fae41ac kind=\"EndpointSlice\"\nI0523 06:31:49.346901       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-86q44-vt2p5\" 
objectUID=641e2f74-49ff-4cbb-906d-26a3f7f6bdf8 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.405070       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8r87v-lzbq8\" objectUID=2ec2cb5b-348b-4a83-8daf-d3a346afafe8 kind=\"EndpointSlice\"\nI0523 06:31:49.417990       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"DeploymentRollbackRevisionNotFound\" message=\"Unable to find last revision.\"\nI0523 06:31:49.448470       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-87hmv-xzs5g\" objectUID=38be7489-927e-45dd-848c-25cd893e3c7a kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.462809       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8182\nI0523 06:31:49.526697       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-8bstd-47vdl\" objectUID=5394fcb5-d816-4e64-bfbd-d7960777ba39 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.572653       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8v2bp-njxmb\" objectUID=31a65f2f-f344-4891-b4ce-43a035b98e72 kind=\"EndpointSlice\"\nI0523 06:31:49.620661       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-978jj-mtsh5\" objectUID=f94fc4f8-1b69-4396-bdc7-d53c04fb35bd kind=\"EndpointSlice\"\nI0523 06:31:49.667607       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-97k49-rt9ss\" objectUID=46ee110b-b923-4db2-937d-c4fedce13e3e kind=\"EndpointSlice\"\nI0523 06:31:49.723626       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"DeploymentRollbackRevisionNotFound\" message=\"Unable to find last revision.\"\nI0523 06:31:49.754726       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-97t8v-dv59c\" objectUID=da8b7000-4ca6-4dfe-a2bd-7288f5f15058 kind=\"EndpointSlice\"\nI0523 06:31:49.755000       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-8db9r-kp8mr\" objectUID=605dc2ab-54d5-4060-a61d-076e8bbab6ce kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.801247       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-99s48-mfcv2\" objectUID=26899661-f728-4510-b3f4-063dd87adf45 kind=\"EndpointSlice\"\nI0523 06:31:49.849397       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-9bsbz-vz7m2\" objectUID=70db2f9e-71ca-4e12-bf58-9abb3d0addd2 kind=\"EndpointSlice\"\nI0523 06:31:49.896085       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-8dlxl-mvnc6\" objectUID=238cbf06-92e8-4e1d-bf24-068648ee98e9 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:49.951903       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-9t8lm-v69mv\" objectUID=94fddf03-37cf-4b7e-ab8b-c392dfc11b9f kind=\"EndpointSlice\"\nI0523 06:31:49.998910       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-9tbqr-9zlbk\" objectUID=c6825b6e-b3db-4762-b9d3-2525f8c1247a kind=\"EndpointSlice\"\nI0523 06:31:50.052214       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-9wphh-l9wxp\" 
objectUID=aaa66078-b83c-4a38-a48b-8ae5348fe341 kind=\"EndpointSlice\"\nI0523 06:31:50.096189       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-8f2zb-xm6s6\" objectUID=ecb70c41-7edc-4162-bbba-5300b6d655d7 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:50.152105       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-b4wh6-ndlnh\" objectUID=a2b553b5-c750-4635-bba1-329282108e88 kind=\"EndpointSlice\"\nI0523 06:31:50.162444       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=5 deleting=1\nI0523 06:31:50.162472       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" relatedReplicaSets=[webserver-dd94f59b7]\nI0523 06:31:50.162548       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-1084/webserver-dd94f59b7-46tk8\"\nI0523 06:31:50.163077       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 5\"\nI0523 06:31:50.176807       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:50.180702       1 operation_generator.go:1433] ExpandVolume succeeded for volume volume-expand-3340/awsm2f9s\nI0523 06:31:50.186262       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-46tk8\"\nI0523 06:31:50.190132       1 operation_generator.go:1445] ExpandVolume.UpdatePV succeeded for volume volume-expand-3340/awsm2f9s\nI0523 06:31:50.202295       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-b8s9n-tzftm\" objectUID=73b374b0-725b-435c-979b-edb54d737b93 kind=\"EndpointSlice\"\nI0523 06:31:50.253072       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-bgtgv-2qkps\" objectUID=35e5b4ae-2edc-4ea0-885d-f37fc2a367e7 kind=\"EndpointSlice\"\nI0523 06:31:50.298141       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-8mbfh-h5p6z\" objectUID=e753297f-c980-43bf-983f-46d26fae41ac kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:50.348972       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-bmp56-bwrxs\" objectUID=569a9cf5-80f6-4511-89c6-4d083863f35a kind=\"EndpointSlice\"\nI0523 06:31:50.397437       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-8r87v-lzbq8\" objectUID=2ec2cb5b-348b-4a83-8daf-d3a346afafe8 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:50.408248       1 pv_controller.go:859] volume \"pvc-0a5abab8-14c4-426f-bdcd-7233359964f4\" entered phase \"Bound\"\nI0523 06:31:50.408278       1 pv_controller.go:962] volume \"pvc-0a5abab8-14c4-426f-bdcd-7233359964f4\" bound to claim \"volume-3477/nfs65nqs\"\nI0523 06:31:50.417650       1 pv_controller.go:803] claim \"volume-3477/nfs65nqs\" entered phase \"Bound\"\nI0523 06:31:50.451756       1 garbagecollector.go:404] \"Processing object\" 
object=\"svc-latency-3162/latency-svc-bmqws-4r8jx\" objectUID=22d203e9-5473-431c-ab18-007799510104 kind=\"EndpointSlice\"\nI0523 06:31:50.503377       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-bvb7l-rtk58\" objectUID=a9226530-f0ce-4c53-97f5-211e23fe0e57 kind=\"EndpointSlice\"\nI0523 06:31:50.548810       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-8v2bp-njxmb\" objectUID=31a65f2f-f344-4891-b4ce-43a035b98e72 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:50.600805       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-978jj-mtsh5\" objectUID=f94fc4f8-1b69-4396-bdc7-d53c04fb35bd kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:50.646674       1 garbagecollector.go:519] \"Deleting object\" object=\"svc-latency-3162/latency-svc-97k49-rt9ss\" objectUID=46ee110b-b923-4db2-937d-c4fedce13e3e kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:50.696022       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-bvthv-592v9\" objectUID=c0101739-a42a-4141-aee9-d912ec60043b kind=\"EndpointSlice\"\nE0523 06:31:50.747468       1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"latency-svc-8db9r-kp8mr\", UID:\"605dc2ab-54d5-4060-a61d-076e8bbab6ce\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3162\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-8db9r\", UID:\"e25ee398-9bee-4921-a451-4b5df7afeba5\", Controller:(*bool)(0xc003213c1a), BlockOwnerDeletion:(*bool)(0xc003213c1b)}}}: endpointslices.discovery.k8s.io \"latency-svc-8db9r-kp8mr\" not found\nI0523 06:31:50.747525       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-c4p9z-xrszf\" objectUID=d67feedc-bb33-4426-a4c1-aaad7d18d220 kind=\"EndpointSlice\"\nI0523 06:31:50.798270       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-cfx8z-xh2sg\" objectUID=c06ef0a3-a22e-46be-b9ee-017672918b24 kind=\"EndpointSlice\"\nI0523 06:31:50.823579       1 pvc_protection_controller.go:291] PVC persistent-local-volumes-test-5695/pvc-8bzhr is unused\nI0523 06:31:50.838651       1 pv_controller.go:633] volume \"local-pvrkndv\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:31:50.842403       1 pv_controller.go:859] volume \"local-pvrkndv\" entered phase \"Released\"\nI0523 06:31:50.854015       1 pv_controller_base.go:500] deletion of claim \"persistent-local-volumes-test-5695/pvc-8bzhr\" was already processed\nI0523 06:31:50.854543       1 garbagecollector.go:404] \"Processing object\" 
object=\"svc-latency-3162/latency-svc-cj6cp-n9jcm\" objectUID=e9c85ebd-e067-4ef8-8886-a4455d8e7a91 kind=\"EndpointSlice\"\nE0523 06:31:50.909320       1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"latency-svc-8dlxl-mvnc6\", UID:\"238cbf06-92e8-4e1d-bf24-068648ee98e9\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3162\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-8dlxl\", UID:\"ea1500ed-10ca-4a95-8b11-dabd26748fd3\", Controller:(*bool)(0xc003488a5e), BlockOwnerDeletion:(*bool)(0xc003488a5f)}}}: endpointslices.discovery.k8s.io \"latency-svc-8dlxl-mvnc6\" not found\nI0523 06:31:50.909357       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-cj6cx-wnjz7\" objectUID=bf4b5c85-eda1-44ab-8d6c-8e01b8f39f8b kind=\"EndpointSlice\"\nI0523 06:31:50.950451       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-cksd9-7jqcn\" objectUID=129d229e-d5c2-4826-bf73-ba72649cdb8e kind=\"EndpointSlice\"\nI0523 06:31:50.997446       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-d24mp-rmmwb\" objectUID=fa68ca88-978b-4aa1-b654-17cee09331e7 kind=\"EndpointSlice\"\nI0523 06:31:51.046952       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-d67lr-mvwkm\" objectUID=6d7590f8-23ef-4f29-8953-88097a80d8d3 kind=\"EndpointSlice\"\nE0523 06:31:51.089143       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nE0523 06:31:51.100712       1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"latency-svc-8f2zb-xm6s6\", UID:\"ecb70c41-7edc-4162-bbba-5300b6d655d7\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3162\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", 
Name:\"latency-svc-8f2zb\", UID:\"e7af5115-aa8e-400a-af94-39399cc49063\", Controller:(*bool)(0xc002688e6e), BlockOwnerDeletion:(*bool)(0xc002688e6f)}}}: endpointslices.discovery.k8s.io \"latency-svc-8f2zb-xm6s6\" not found\nI0523 06:31:51.100750       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-df9l8-t9rlx\" objectUID=f2f6a901-4afd-4e43-8996-c884d391a3c1 kind=\"EndpointSlice\"\nI0523 06:31:51.158260       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-dhlhz-vwzwh\" objectUID=f36ba7ff-72e0-4a80-b3fa-e5ddab48ee4c kind=\"EndpointSlice\"\nI0523 06:31:51.197279       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-drgk4-l5r5w\" objectUID=107d0021-dcc8-4462-8f22-9eb080b80d68 kind=\"EndpointSlice\"\nI0523 06:31:51.246116       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-dvmzc-kczkq\" objectUID=bd1c6fb2-c656-483e-b1df-8b1bccf72fa7 kind=\"EndpointSlice\"\nE0523 06:31:51.250667       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nE0523 06:31:51.298066       1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"latency-svc-8mbfh-h5p6z\", UID:\"e753297f-c980-43bf-983f-46d26fae41ac\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3162\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-8mbfh\", UID:\"60820354-7fbc-4321-bf5f-02c28b9e5607\", Controller:(*bool)(0xc002cad8ea), BlockOwnerDeletion:(*bool)(0xc002cad8eb)}}}: endpointslices.discovery.k8s.io \"latency-svc-8mbfh-h5p6z\" not found\nI0523 06:31:51.298103       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-dwb92-njpsh\" objectUID=28d5fa05-240e-47ed-b352-9b83d72bd6cd kind=\"EndpointSlice\"\nI0523 06:31:51.345891       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-dxcxn-458vj\" objectUID=6104f27e-79a7-4f0c-8310-5909a4efd90d kind=\"EndpointSlice\"\nI0523 06:31:51.373526       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"svc-latency-3162/svc-latency-rc\" need=1 creating=1\nE0523 06:31:51.396177       1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"latency-svc-8r87v-lzbq8\", UID:\"2ec2cb5b-348b-4a83-8daf-d3a346afafe8\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3162\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, 
writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-8r87v\", UID:\"8843f94d-0ea5-4420-8a20-34dea5b444eb\", Controller:(*bool)(0xc000d401da), BlockOwnerDeletion:(*bool)(0xc000d401db)}}}: endpointslices.discovery.k8s.io \"latency-svc-8r87v-lzbq8\" not found\nI0523 06:31:51.396216       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-f5djf-4z2f8\" objectUID=d0dc1f98-44d1-4714-ab06-d7b3fd05ee89 kind=\"EndpointSlice\"\nE0523 06:31:51.438523       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nI0523 06:31:51.446800       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-f5xbs-cg4f4\" objectUID=4b423e8e-d5c0-4d9e-a3b8-3789e1060dcf kind=\"EndpointSlice\"\nE0523 06:31:51.456417       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:51.497002       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-f6jt5-r95s4\" objectUID=68143cb5-08b9-47e2-8be5-703e35e57d8a kind=\"EndpointSlice\"\nE0523 06:31:51.546255       1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"latency-svc-8v2bp-njxmb\", UID:\"31a65f2f-f344-4891-b4ce-43a035b98e72\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3162\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-8v2bp\", UID:\"90ad4da5-2617-4e48-b3a5-f8b10b9eda97\", Controller:(*bool)(0xc001ba68ce), BlockOwnerDeletion:(*bool)(0xc001ba68cf)}}}: endpointslices.discovery.k8s.io \"latency-svc-8v2bp-njxmb\" not found\nI0523 06:31:51.546295       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-f8l9c-mprzc\" objectUID=977d2abd-9376-49b3-ab64-1ae7c0d7e811 kind=\"EndpointSlice\"\nE0523 06:31:51.596181       1 garbagecollector.go:309] error syncing item 
&garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"latency-svc-978jj-mtsh5\", UID:\"f94fc4f8-1b69-4396-bdc7-d53c04fb35bd\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3162\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-978jj\", UID:\"1ad7d7c7-ab57-482c-bfe9-146e2631a338\", Controller:(*bool)(0xc00311d19a), BlockOwnerDeletion:(*bool)(0xc00311d19b)}}}: endpointslices.discovery.k8s.io \"latency-svc-978jj-mtsh5\" not found\nI0523 06:31:51.596223       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-f9krx-h89x7\" objectUID=7e15490a-c596-40ea-9c82-220f94af9f32 kind=\"EndpointSlice\"\nE0523 06:31:51.605370       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nE0523 06:31:51.605424       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nE0523 06:31:51.646288       1 garbagecollector.go:309] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1beta1\", Kind:\"EndpointSlice\", Name:\"latency-svc-97k49-rt9ss\", UID:\"46ee110b-b923-4db2-937d-c4fedce13e3e\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"svc-latency-3162\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Service\", Name:\"latency-svc-97k49\", UID:\"2dbbcf8c-97ac-4f76-bad4-d01ce852dba3\", Controller:(*bool)(0xc001cfc0da), BlockOwnerDeletion:(*bool)(0xc001cfc0db)}}}: endpointslices.discovery.k8s.io \"latency-svc-97k49-rt9ss\" not found\nI0523 06:31:51.646342       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-fb8n5-njflx\" objectUID=b9e6630a-f7f3-400e-9368-b64b1deebacb kind=\"EndpointSlice\"\nI0523 06:31:51.696982       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-fqhsl-zlkgx\" objectUID=44bd52b5-93af-4e6c-b0f9-020b5b0854c3 
kind=\"EndpointSlice\"\nI0523 06:31:51.746582       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-fvjv2-n2npc\" objectUID=5acbc02d-f7ab-4650-a189-ff455ca857fb kind=\"EndpointSlice\"\nE0523 06:31:51.766680       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nE0523 06:31:51.783782       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nI0523 06:31:51.797543       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-fzltp-sx4kb\" objectUID=5bdb9716-6fa4-452b-a525-93ea898b8364 kind=\"EndpointSlice\"\nI0523 06:31:51.845672       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-fzzk4-tqg5m\" objectUID=9f41f896-07c6-4f7a-af6f-ca7db4014fc8 kind=\"EndpointSlice\"\nI0523 06:31:51.897157       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-gclcv-swvvf\" objectUID=4245df54-cb60-4683-a1b4-a82813f408a1 kind=\"EndpointSlice\"\nE0523 06:31:51.903215       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:51.945634       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-gdr97-k6sq5\" objectUID=462583e8-a8cc-4649-848b-7f8371616eb4 kind=\"EndpointSlice\"\nI0523 06:31:51.996652       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-gh8lz-87llm\" objectUID=74f6af7b-eedb-4c48-8b3b-0910ed8e7c6c kind=\"EndpointSlice\"\nE0523 06:31:51.998696       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nI0523 06:31:52.048869       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-gjbbl-jnmx6\" objectUID=ab65398b-d7e4-4bdb-9dfa-0313302e83db kind=\"EndpointSlice\"\nE0523 06:31:52.082583       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:52.095330       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-gkpcp-wwszj\" objectUID=46a7e7bd-24bc-421b-aeac-d95ff746feaf kind=\"EndpointSlice\"\nI0523 06:31:52.145366       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-h85b7-b5f78\" objectUID=5de664bf-1153-4249-8eae-c2bbcdc2b3e8 kind=\"EndpointSlice\"\nI0523 06:31:52.197970       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-hbzcb-7jswl\" objectUID=9abef3bf-6eea-4d47-a69d-5ace5d001154 kind=\"EndpointSlice\"\nI0523 06:31:52.247844       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-hc7mq-h5nbx\" objectUID=e27dc294-4f4b-49ca-8cb1-d622545d2ab4 kind=\"EndpointSlice\"\nI0523 06:31:52.299610       1 request.go:645] Throttling request took 1.001367559s, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1beta1/namespaces/svc-latency-3162/endpointslices/latency-svc-dwb92-njpsh\nI0523 06:31:52.308771       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-hct8x-sjpfj\" 
objectUID=2b2a6a7d-bffd-459d-ae11-679d43c173e4 kind=\"EndpointSlice\"\nE0523 06:31:52.308907       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nI0523 06:31:52.321224       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=4 deleting=1\nI0523 06:31:52.321255       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" relatedReplicaSets=[webserver-dd94f59b7]\nI0523 06:31:52.321307       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-1084/webserver-dd94f59b7-qjnqw\"\nI0523 06:31:52.323002       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 4\"\nI0523 06:31:52.332366       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-8744fbf59\" need=1 creating=1\nI0523 06:31:52.332929       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-8744fbf59 to 1\"\nI0523 06:31:52.339547       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-8744fbf59\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-8744fbf59-bzx2v\"\nI0523 06:31:52.359674       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-qjnqw\"\nI0523 06:31:52.360476       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-hd97v-2bkgf\" objectUID=60054f2e-6f49-4ee3-90bf-2334be2799a2 kind=\"EndpointSlice\"\nI0523 06:31:52.360718       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 3\"\nI0523 06:31:52.366642       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-8744fbf59\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:52.371393       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-8744fbf59\" need=2 creating=1\nI0523 06:31:52.371849       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-8744fbf59 to 2\"\nI0523 06:31:52.375371       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=3 deleting=1\nI0523 06:31:52.375395       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" relatedReplicaSets=[webserver-dd94f59b7 webserver-8744fbf59]\nI0523 06:31:52.375447       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-dd94f59b7\" pod=\"deployment-1084/webserver-dd94f59b7-5vmjk\"\nI0523 06:31:52.380621       1 
event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-8744fbf59\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-8744fbf59-dg894\"\nE0523 06:31:52.397034       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:52.397075       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-5vmjk\"\nI0523 06:31:52.404269       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-j298t-czjkc\" objectUID=80e877b9-1d42-4f84-9a9a-6dd6f30989c6 kind=\"EndpointSlice\"\nI0523 06:31:52.414325       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:52.445373       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-jg7d8-vrxfr\" objectUID=ee23b7e8-6fed-4eb6-b58c-46f7207c4542 kind=\"EndpointSlice\"\nI0523 06:31:52.489921       1 namespace_controller.go:185] Namespace has been deleted volume-9388\nI0523 06:31:52.496328       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-jjrcd-6jzl6\" objectUID=32f243c0-427d-447e-a1d6-0d5481d57b14 kind=\"EndpointSlice\"\nI0523 06:31:52.516199       1 namespace_controller.go:185] Namespace has been deleted deployment-5799\nI0523 06:31:52.545340       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-jlls2-cbqq2\" objectUID=589ffc31-cdf5-4d3c-9a4d-2e667ba2485b kind=\"EndpointSlice\"\nI0523 06:31:52.595543       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-jmdhm-562kw\" objectUID=7e6eb6af-56c9-411b-92d0-3fefc6c053bb kind=\"EndpointSlice\"\nI0523 06:31:52.645911       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-jrlzj-9ltdd\" objectUID=56ef21b5-30b1-4627-ad01-8830a92d1cdf kind=\"EndpointSlice\"\nE0523 06:31:52.686157       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:52.704189       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-jrq2f-bq779\" objectUID=d07e956a-ef56-485d-b0da-2bf40af76671 kind=\"EndpointSlice\"\nI0523 06:31:52.747150       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-jv4pr-nrsxs\" objectUID=37e35db6-f298-42ab-8863-9ad8e74473ea kind=\"EndpointSlice\"\nI0523 06:31:52.796980       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-k76b6-dfgns\" objectUID=2604ad03-2c7a-4750-aa83-c483479e56ba kind=\"EndpointSlice\"\nE0523 06:31:52.797611       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nI0523 06:31:52.849418       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-kf5d6-4z77c\" objectUID=68cc4635-69a5-4d21-8399-e455869ad156 
kind=\"EndpointSlice\"\nE0523 06:31:52.849681       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-8562/default: secrets \"default-token-m9z7c\" is forbidden: unable to create new content in namespace provisioning-8562 because it is being terminated\nI0523 06:31:52.895340       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-kw9bm-kdffk\" objectUID=1bd03661-2f36-438f-9a87-a1814dd6f621 kind=\"EndpointSlice\"\nI0523 06:31:52.945367       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-l2d62-c48nx\" objectUID=45c353e0-4f50-4dfd-a184-f93823006d01 kind=\"EndpointSlice\"\nI0523 06:31:52.996333       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-l4j2w-flnmt\" objectUID=9ddf84c0-4a00-4641-a997-ba6d96628efd kind=\"EndpointSlice\"\nI0523 06:31:53.045874       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-l57pc-6hc9m\" objectUID=303f2059-004f-4b03-b331-e5b4da7fbe05 kind=\"EndpointSlice\"\nI0523 06:31:53.095989       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-l6227-lpwd8\" objectUID=1e6cd89e-fcc3-46da-abd2-c45830df45f1 kind=\"EndpointSlice\"\nE0523 06:31:53.100651       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:53.145508       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-lb79m-nn6sl\" objectUID=b852350d-5f69-486f-9dad-0725388e28e1 kind=\"EndpointSlice\"\nI0523 06:31:53.195341       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-lbbvj-h9ndj\" objectUID=519d21ef-bb18-4cd0-a5e6-85ac370fb471 kind=\"EndpointSlice\"\nI0523 06:31:53.245472       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-lbgcp-fflcw\" objectUID=f7dc431a-5692-41d9-ab48-31c1df691823 kind=\"EndpointSlice\"\nI0523 06:31:53.295325       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-lgct4-dg8df\" objectUID=7c90d325-6b44-41da-90de-5741e294442f kind=\"EndpointSlice\"\nI0523 06:31:53.345313       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-lhfjd-d7mh6\" objectUID=f861fb0f-89b0-4c75-961e-f3bde58101b4 kind=\"EndpointSlice\"\nI0523 06:31:53.397367       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-lq9ts-588tq\" objectUID=41c5a1bc-542a-4c09-b9ec-091ae6b0c17f kind=\"EndpointSlice\"\nI0523 06:31:53.445599       1 request.go:645] Throttling request took 1.000156716s, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1beta1/namespaces/svc-latency-3162/endpointslices/latency-svc-jg7d8-vrxfr\nI0523 06:31:53.457679       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-m47jl-z8fn9\" objectUID=09adc9b4-a8b1-4a85-b1de-beccf9e85755 kind=\"EndpointSlice\"\nI0523 06:31:53.495939       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-m9php-g7vbn\" objectUID=b56d6415-2c87-48cb-8330-b5119fa55384 kind=\"EndpointSlice\"\nI0523 06:31:53.517469       1 namespace_controller.go:185] Namespace has been deleted services-9797\nI0523 06:31:53.538685       1 namespace_controller.go:185] Namespace has been deleted job-2400\nE0523 06:31:53.539898       1 namespace_controller.go:162] deletion 
of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nI0523 06:31:53.545323       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-mc9rx-pt48q\" objectUID=6f1a5d88-988d-4c2d-806e-ef7db6b029b9 kind=\"EndpointSlice\"\nI0523 06:31:53.595798       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-mcjxm-wqf7b\" objectUID=cead8bcc-ad45-4acb-8376-0689c1ebbfa8 kind=\"EndpointSlice\"\nI0523 06:31:53.645969       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-mdpq4-4g4vt\" objectUID=fddf7caa-d66c-4725-97d4-d86987ec14f1 kind=\"EndpointSlice\"\nI0523 06:31:53.697304       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-mgl5v-f6jtj\" objectUID=416b8399-07bc-4e53-994d-1a8303d8e968 kind=\"EndpointSlice\"\nE0523 06:31:53.697458       1 tokens_controller.go:261] error synchronizing serviceaccount security-context-test-3775/default: serviceaccounts \"default\" not found\nI0523 06:31:53.751089       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-mrfkb-nv2l6\" objectUID=321fb3fc-1437-4d99-a999-b32a5a287a18 kind=\"EndpointSlice\"\nI0523 06:31:53.800220       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-mrfkl-dvvbn\" objectUID=ba471f33-6959-4d98-95e1-bcb6e6b4cb61 kind=\"EndpointSlice\"\nI0523 06:31:53.845936       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-mwnhf-lg545\" objectUID=664c61a6-d27a-4938-80ad-d1f199fbc6cc kind=\"EndpointSlice\"\nE0523 06:31:53.849937       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:53.895417       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-n2cr4-928wz\" objectUID=81aec11b-1de9-467d-a73c-bffbede005c3 kind=\"EndpointSlice\"\nI0523 06:31:53.903326       1 namespace_controller.go:185] Namespace has been deleted volume-882\nI0523 06:31:53.948905       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-n5pvn-t7dgr\" objectUID=e9675e91-6e0b-47db-b122-38c4d7aa5070 kind=\"EndpointSlice\"\nI0523 06:31:53.995429       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-nbwgk-qrd87\" objectUID=6957ff6b-b268-4548-bbdb-3a138592748c kind=\"EndpointSlice\"\nI0523 06:31:54.045427       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-nlszd-7cqnv\" objectUID=b8ed235b-05f2-4c87-b16e-77855aea2ee7 kind=\"EndpointSlice\"\nI0523 06:31:54.056623       1 namespace_controller.go:185] Namespace has been deleted kubectl-5140\nI0523 06:31:54.095378       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-np9kn-szdnm\" objectUID=81331ea6-c1b5-4e24-886d-3129c4b2339a kind=\"EndpointSlice\"\nI0523 06:31:54.145381       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-nrp82-gvs6c\" objectUID=5db3765e-f78a-46ed-b65c-9468148a779d kind=\"EndpointSlice\"\nI0523 06:31:54.195413       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-nsn2z-rqtxl\" objectUID=b0a685af-6580-44f6-a48e-71f72baa1f1a kind=\"EndpointSlice\"\nI0523 06:31:54.245371       1 garbagecollector.go:404] 
\"Processing object\" object=\"svc-latency-3162/latency-svc-nspgc-474jl\" objectUID=1d36b23d-7ef4-4362-8b22-be514ddf24d6 kind=\"EndpointSlice\"\nI0523 06:31:54.314784       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-nt6ks-pp9df\" objectUID=bc5f45dd-a14f-48ca-bcce-21aff7db770b kind=\"EndpointSlice\"\nI0523 06:31:54.355690       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-nzk9w-qhj2z\" objectUID=2d31753d-c15d-41f7-b928-468770dc2b60 kind=\"EndpointSlice\"\nI0523 06:31:54.422409       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-p46jq-z6jb2\" objectUID=d599e694-5646-458a-a45b-37bd9693f549 kind=\"EndpointSlice\"\nI0523 06:31:54.458967       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-p6nzv-9gwkw\" objectUID=b10ef98e-53b1-43ef-ad18-4759726efd35 kind=\"EndpointSlice\"\nI0523 06:31:54.496184       1 request.go:645] Throttling request took 1.000170533s, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1beta1/namespaces/svc-latency-3162/endpointslices/latency-svc-m9php-g7vbn\nI0523 06:31:54.511142       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=4 creating=1\nI0523 06:31:54.512949       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 4\"\nI0523 06:31:54.513501       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-p86wp-mxvpm\" objectUID=7f7ce0c9-76ba-4ad9-8287-623c4767f717 kind=\"EndpointSlice\"\nI0523 06:31:54.518353       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-wngtn\"\nI0523 06:31:54.538762       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-8744fbf59 to 3\"\nI0523 06:31:54.539015       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-8744fbf59\" need=3 creating=1\nI0523 06:31:54.550138       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-8744fbf59\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-8744fbf59-k49xz\"\nI0523 06:31:54.554427       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-p9qn4-h7plx\" objectUID=9fa672d4-88c7-45db-a918-6b6b50c4a2a2 kind=\"EndpointSlice\"\nI0523 06:31:54.603381       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-pdb9b-vjw97\" objectUID=73f5c18a-2439-4705-b807-60de55ddc866 kind=\"EndpointSlice\"\nI0523 06:31:54.620642       1 event.go:291] \"Event occurred\" object=\"statefulset-7578/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0523 06:31:54.645593       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-phgvl-tplq8\" objectUID=9aabadae-56f3-4727-a695-41c918f7ccc3 kind=\"EndpointSlice\"\nI0523 06:31:54.695531       1 garbagecollector.go:404] \"Processing object\" 
object=\"svc-latency-3162/latency-svc-pk94x-bcz5l\" objectUID=34405251-730f-4ec4-a71f-5901cc93d3a6 kind=\"EndpointSlice\"\nI0523 06:31:54.745382       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-pp7c5-b7hdd\" objectUID=063f1916-2cae-4ede-ab38-c5555484ea41 kind=\"EndpointSlice\"\nI0523 06:31:54.795860       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-ppc62-dkz45\" objectUID=8f957109-7734-4c24-bb23-45604cba6a55 kind=\"EndpointSlice\"\nI0523 06:31:54.823734       1 graph_builder.go:510] add [v1/Pod, namespace: ephemeral-3638, name: inline-volume-tester2-sb5nj, uid: 324e0e64-7e1e-4699-8d4b-a8b30c982763] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0523 06:31:54.846603       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-pt9mc-nb59p\" objectUID=414f270e-7716-4a7a-bfd2-12472ea28527 kind=\"EndpointSlice\"\nI0523 06:31:54.895710       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-q42v8-dp7lv\" objectUID=e30a0a3d-17c1-4b21-b2be-f1e695f9c815 kind=\"EndpointSlice\"\nE0523 06:31:54.915369       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nI0523 06:31:54.947855       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-q8dwh-v42wp\" objectUID=d13b19e8-abfc-4e85-b3ca-7c02ce7bfb81 kind=\"EndpointSlice\"\nI0523 06:31:54.995445       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-q8pmc-8tcwg\" objectUID=c8d7aeb9-5283-4df6-895b-6479aad3dc71 kind=\"EndpointSlice\"\nI0523 06:31:55.016657       1 event.go:291] \"Event occurred\" object=\"statefulset-3948/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0523 06:31:55.050657       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-qk7gv-t6xbh\" objectUID=5699a9ca-cd79-4763-9f12-48b6082a9e4d kind=\"EndpointSlice\"\nI0523 06:31:55.095565       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-qlrk8-mst8j\" objectUID=bc6dc8d1-b404-4e90-8f28-05edc5e918fb kind=\"EndpointSlice\"\nI0523 06:31:55.145504       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-qnjfx-4lb2h\" objectUID=d444c4c7-ce99-4016-bf4e-be4fdb66627e kind=\"EndpointSlice\"\nI0523 06:31:55.195922       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-qsbnq-ttm9l\" objectUID=f5be7245-49fd-4698-af76-d404957d9e14 kind=\"EndpointSlice\"\nE0523 06:31:55.216710       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:55.245381       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-qspjr-fpwjf\" objectUID=f2f9eb07-4608-4e29-bd19-99c1a233a91f kind=\"EndpointSlice\"\nI0523 06:31:55.295508       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-qvsd6-rvc7g\" objectUID=26a69057-23b0-4240-b5f5-fc41f4b2e340 kind=\"EndpointSlice\"\nI0523 06:31:55.345478       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-qwv5d-89db7\" 
objectUID=ab8133a8-923f-4b23-8313-ecfdcdf6c9cf kind=\"EndpointSlice\"\nI0523 06:31:55.395416       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-r2m27-lvdlm\" objectUID=94456015-7f92-4f73-bbec-330ca51f2ed6 kind=\"EndpointSlice\"\nI0523 06:31:55.445304       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-r49c6-hb8wc\" objectUID=0ce205f3-ca5c-42c5-95c8-138a2d052256 kind=\"EndpointSlice\"\nI0523 06:31:55.495530       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-r8hnx-wn7qp\" objectUID=3d8d8831-9bb5-48ee-8354-3c7e3edf665f kind=\"EndpointSlice\"\nI0523 06:31:55.506828       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"services-6248/slow-terminating-unready-pod\" need=0 deleting=1\nE0523 06:31:55.507036       1 replica_set.go:201] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{slow-terminating-unready-pod  services-6248 /api/v1/namespaces/services-6248/replicationcontrollers/slow-terminating-unready-pod 271ceea1-1db5-4125-9038-e643e7cb4600 13888 2 2021-05-23 06:31:42 +0000 UTC <nil> <nil> map[name:slow-terminating-unready-pod testid:tolerate-unready-81ab5a90-9c2d-4a71-83e0-ba903e078ec6] map[] [] []  [{e2e.test Update v1 2021-05-23 06:31:42 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\".\":{},\"f:name\":{},\"f:testid\":{}}},\"f:spec\":{\"f:replicas\":{},\"f:selector\":{\".\":{},\"f:name\":{}},\"f:template\":{\".\":{},\"f:metadata\":{\".\":{},\"f:creationTimestamp\":{},\"f:labels\":{\".\":{},\"f:name\":{},\"f:testid\":{}}},\"f:spec\":{\".\":{},\"f:containers\":{\".\":{},\"k:{\\\"name\\\":\\\"slow-terminating-unready-pod\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:lifecycle\":{\".\":{},\"f:preStop\":{\".\":{},\"f:exec\":{\".\":{},\"f:command\":{}}}},\"f:name\":{},\"f:ports\":{\".\":{},\"k:{\\\"containerPort\\\":80,\\\"protocol\\\":\\\"TCP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:protocol\":{}}},\"f:readinessProbe\":{\".\":{},\"f:exec\":{\".\":{},\"f:command\":{}},\"f:failureThreshold\":{},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}}} {kube-controller-manager Update v1 2021-05-23 06:31:42 +0000 UTC FieldsV1 {\"f:status\":{\"f:fullyLabeledReplicas\":{},\"f:observedGeneration\":{},\"f:replicas\":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: slow-terminating-unready-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:slow-terminating-unready-pod testid:tolerate-unready-81ab5a90-9c2d-4a71-83e0-ba903e078ec6] map[] [] []  []} {[] [] [{slow-terminating-unready-pod k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [netexec --http-port=80]  [{ 0 80 TCP }] [] [] {map[] map[]} [] [] nil Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/false],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil &Lifecycle{PostStart:nil,PreStop:&Handler{Exec:&ExecAction{Command:[/bin/sleep 600],},HTTPGet:nil,TCPSocket:nil,},} /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0027dad98 <nil> ClusterFirst map[]   <nil>  false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}\nI0523 06:31:55.507080       1 controller_utils.go:604] \"Deleting pod\" controller=\"slow-terminating-unready-pod\" pod=\"services-6248/slow-terminating-unready-pod-67g6s\"\nI0523 06:31:55.510965       1 event.go:291] \"Event occurred\" object=\"services-6248/slow-terminating-unready-pod\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: slow-terminating-unready-pod-67g6s\"\nI0523 06:31:55.545369       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-rdhdv-lnnbz\" objectUID=7bbb6f14-d61a-40e3-a16e-e1fb0ff5c189 kind=\"EndpointSlice\"\nI0523 06:31:55.598135       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-rkw95-7bsdp\" objectUID=b7709be6-938c-47c0-9433-7988967af1a1 kind=\"EndpointSlice\"\nI0523 06:31:55.598368       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 1\"\nI0523 06:31:55.604348       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-dd94f59b7\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:55.611125       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:55.645431       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-rkzd2-cjhv5\" objectUID=d41611e6-f23c-4d69-ac11-e541bb0c86bd kind=\"EndpointSlice\"\nI0523 06:31:55.695327       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-rln46-7sld5\" objectUID=d33ddce0-5409-4d8f-ab67-2a84b066e7bf kind=\"EndpointSlice\"\nI0523 06:31:55.745475       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-rng6x-h4n9t\" objectUID=ff0e20a7-759f-4e5b-b64d-5056755b13ea kind=\"EndpointSlice\"\nE0523 06:31:55.793782       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0523 06:31:55.795357       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-rt8ld-47w2j\" objectUID=f33a113a-5c73-4823-9182-16ca2a3bbf6d kind=\"EndpointSlice\"\nI0523 06:31:55.845300       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-s49f4-bblwj\" objectUID=194249f9-388c-41f7-b4c0-d43e3d4677ab kind=\"EndpointSlice\"\nI0523 06:31:55.899054       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-s6blm-r9445\" 
objectUID=da8ce040-6b46-46d6-ad8e-695aa5fed4fc kind=\"EndpointSlice\"\nI0523 06:31:55.945398       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-s87f4-wcv4t\" objectUID=ad9b984c-b56f-479a-b5a0-045c09f4f60b kind=\"EndpointSlice\"\nI0523 06:31:55.995366       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-sd728-zcvp2\" objectUID=4a26c632-dabc-4479-badd-bf4db2dfdc7d kind=\"EndpointSlice\"\nI0523 06:31:55.997675       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-5695\nI0523 06:31:56.045359       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-slfzx-5wxcx\" objectUID=00b9a017-bcfa-44cd-b652-3070e382a25a kind=\"EndpointSlice\"\nI0523 06:31:56.095364       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-sm2dv-2kn7x\" objectUID=a976b801-8d83-413d-a630-ebb7636fe823 kind=\"EndpointSlice\"\nI0523 06:31:56.145425       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-spndh-5k4tc\" objectUID=c0eafbb4-eb43-4f36-b48d-9231fda331f3 kind=\"EndpointSlice\"\nI0523 06:31:56.195381       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-swnwx-fjgfk\" objectUID=05811e73-d34f-456e-8520-dd7f3089768f kind=\"EndpointSlice\"\nI0523 06:31:56.245343       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-t4qvk-4xjsd\" objectUID=c796f51e-3d86-4fbc-a700-6e0ae2c7ed46 kind=\"EndpointSlice\"\nI0523 06:31:56.295382       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-t5whv-4pz5j\" objectUID=1d479cb1-77a7-41fd-ba3c-d08265d020cb kind=\"EndpointSlice\"\nI0523 06:31:56.345390       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-t9r6p-4dhkn\" objectUID=f5d78221-d4f6-422a-b0b6-24282d9257f0 kind=\"EndpointSlice\"\nI0523 06:31:56.395452       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-tfk9v-pm5fd\" objectUID=09b5d5ea-ef8f-4cc7-b0ae-2ef946173081 kind=\"EndpointSlice\"\nI0523 06:31:56.445832       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-tkqlk-vzx6g\" objectUID=7758d404-2fda-4f7b-87d8-87f8fa1daf77 kind=\"EndpointSlice\"\nI0523 06:31:56.496055       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-v2rfw-qc7x4\" objectUID=6403869c-c83e-4cf7-9e31-067c355a1be7 kind=\"EndpointSlice\"\nI0523 06:31:56.545409       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-v4qhg-gzjvw\" objectUID=483eb12e-4a30-479a-bbda-43154588859a kind=\"EndpointSlice\"\nI0523 06:31:56.595543       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-vktk7-4d69k\" objectUID=37b3ba25-cb8e-49c5-8e96-3001488d43ad kind=\"EndpointSlice\"\nI0523 06:31:56.645376       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-vld4q-wqgql\" objectUID=b35c7969-5fc0-439e-ba52-f44520f55e1e kind=\"EndpointSlice\"\nI0523 06:31:56.695562       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-vsxbn-hr5m8\" objectUID=e0f33a3f-d531-4de6-a627-29d7fedc7ab9 kind=\"EndpointSlice\"\nI0523 06:31:56.745368       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-w2zn7-zd5c5\" 
objectUID=5d608a00-c175-44b7-a9eb-00dd007b9284 kind=\"EndpointSlice\"\nI0523 06:31:56.796939       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-w7n24-c2r42\" objectUID=ae294e12-7d38-418c-8952-952174d457a1 kind=\"EndpointSlice\"\nI0523 06:31:56.845515       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-w7n6x-xhc92\" objectUID=b861ab16-1756-468d-ba03-06f24e7e99f3 kind=\"EndpointSlice\"\nI0523 06:31:56.856141       1 pv_controller.go:859] volume \"pvc-e743cbf3-15a3-4aa7-b2de-eb04374549c4\" entered phase \"Bound\"\nI0523 06:31:56.856169       1 pv_controller.go:962] volume \"pvc-e743cbf3-15a3-4aa7-b2de-eb04374549c4\" bound to claim \"volumemode-2019/csi-hostpath252pz\"\nI0523 06:31:56.865170       1 pv_controller.go:803] claim \"volumemode-2019/csi-hostpath252pz\" entered phase \"Bound\"\nI0523 06:31:56.901789       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-w8sql-9gkr9\" objectUID=3fdfd25f-b57d-4b8b-abc6-9e80f671b3cd kind=\"EndpointSlice\"\nI0523 06:31:56.947609       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-wplmc-25zsv\" objectUID=4ad289bc-e50d-4a41-9202-5ef4e91932f0 kind=\"EndpointSlice\"\nI0523 06:31:56.995389       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-wv29v-4h688\" objectUID=8de30917-f59e-4c45-82ad-8e0ed6a77f22 kind=\"EndpointSlice\"\nI0523 06:31:57.021364       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-3444/pvc-r7n8c\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-3444\\\" or manually created by system administrator\"\nI0523 06:31:57.039348       1 pv_controller.go:859] volume \"pvc-d0d6a6cf-4391-439c-8404-00147a9454c5\" entered phase \"Bound\"\nI0523 06:31:57.039377       1 pv_controller.go:962] volume \"pvc-d0d6a6cf-4391-439c-8404-00147a9454c5\" bound to claim \"csi-mock-volumes-3444/pvc-r7n8c\"\nI0523 06:31:57.047762       1 pv_controller.go:803] claim \"csi-mock-volumes-3444/pvc-r7n8c\" entered phase \"Bound\"\nI0523 06:31:57.050831       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-wwj98-rhckc\" objectUID=30f5cf43-bacc-4491-b63d-6d99298b8865 kind=\"EndpointSlice\"\nI0523 06:31:57.095499       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-x47gt-s6vj7\" objectUID=8da9bf83-19ff-4e2a-a996-841b27eed2b7 kind=\"EndpointSlice\"\nI0523 06:31:57.145506       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-x8npc-4ppgp\" objectUID=ad72127a-f90f-4d00-a5e6-6c44f1e2bc93 kind=\"EndpointSlice\"\nI0523 06:31:57.174523       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-d0d6a6cf-4391-439c-8404-00147a9454c5\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-3444^4\") from node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:57.195825       1 request.go:645] Throttling request took 1.000351376s, request: GET:https://127.0.0.1/apis/discovery.k8s.io/v1beta1/namespaces/svc-latency-3162/endpointslices/latency-svc-swnwx-fjgfk\nI0523 06:31:57.202378       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-xdn4m-49cmt\" objectUID=3565039f-ebbd-4032-82af-c15ca7b81128 kind=\"EndpointSlice\"\nI0523 
06:31:57.221965       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-d0d6a6cf-4391-439c-8404-00147a9454c5\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-3444^4\") from node \"ip-172-20-52-97.ca-central-1.compute.internal\" \nI0523 06:31:57.222228       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-3444/pvc-volume-tester-cntt4\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-d0d6a6cf-4391-439c-8404-00147a9454c5\\\" \"\nI0523 06:31:57.245393       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-xfp9f-vpqpx\" objectUID=87df361a-1730-4b9d-9001-a580c7fcaacf kind=\"EndpointSlice\"\nI0523 06:31:57.295386       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-xml7j-jswcf\" objectUID=a6a9f6db-869b-452f-a9b9-73063bbf4d41 kind=\"EndpointSlice\"\nI0523 06:31:57.346173       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-xqm4r-9lqb7\" objectUID=4f33e6ce-c4dc-4171-afba-d6c664823507 kind=\"EndpointSlice\"\nI0523 06:31:57.397535       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-xtl7b-79bp7\" objectUID=b69934a2-4b45-4cce-b0ba-0616fa29234a kind=\"EndpointSlice\"\nI0523 06:31:57.445383       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-xtnmv-jjnwc\" objectUID=2697a96f-c90b-4b62-a4b1-ec257bfbc2da kind=\"EndpointSlice\"\nI0523 06:31:57.495793       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-xvdhl-6kxj9\" objectUID=aa28c68a-3b98-47a2-9c79-6c4151efa990 kind=\"EndpointSlice\"\nI0523 06:31:57.545760       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-xvsrp-rjt7g\" objectUID=0711bee3-20a8-40c3-9181-327135013ee2 kind=\"EndpointSlice\"\nE0523 06:31:57.564934       1 namespace_controller.go:162] deletion of namespace kubelet-test-6334 failed: unexpected items still remain in namespace: kubelet-test-6334 for gvr: /v1, Resource=pods\nI0523 06:31:57.597156       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-z7f2x-57wmp\" objectUID=c732eb09-db77-42ca-9742-759f133158b2 kind=\"EndpointSlice\"\nI0523 06:31:57.646125       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-z9ldf-mj9kz\" objectUID=d1ea2b19-2664-4327-9648-00a88992d246 kind=\"EndpointSlice\"\nI0523 06:31:57.682660       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-e743cbf3-15a3-4aa7-b2de-eb04374549c4\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-2019^924226cf-bb90-11eb-a4df-46f448dd6ea2\") from node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:57.689038       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-e743cbf3-15a3-4aa7-b2de-eb04374549c4\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-2019^924226cf-bb90-11eb-a4df-46f448dd6ea2\") from node \"ip-172-20-36-181.ca-central-1.compute.internal\" \nI0523 06:31:57.689111       1 event.go:291] \"Event occurred\" object=\"volumemode-2019/pod-18ed5b2e-9d8b-429b-8fe8-354131a8abab\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e743cbf3-15a3-4aa7-b2de-eb04374549c4\\\" \"\nI0523 06:31:57.698210       1 
garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-zdhx4-fp2zj\" objectUID=eae1c0eb-f9d4-4dbb-8493-540b62362105 kind=\"EndpointSlice\"\nI0523 06:31:57.749014       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-zq8ln-t8vn7\" objectUID=1defb00d-cefe-4f40-9fc4-0ace91190864 kind=\"EndpointSlice\"\nI0523 06:31:57.819319       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-zrzbn-k8ldn\" objectUID=662831fc-2e04-498e-b561-37ec114b6fd9 kind=\"EndpointSlice\"\nI0523 06:31:57.846221       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8db9r-kp8mr\" objectUID=605dc2ab-54d5-4060-a61d-076e8bbab6ce kind=\"EndpointSlice\"\nI0523 06:31:57.880823       1 namespace_controller.go:185] Namespace has been deleted provisioning-8562\nI0523 06:31:57.895933       1 garbagecollector.go:404] \"Processing object\" object=\"persistent-local-volumes-test-5695/pod-675cd175-85a1-40d3-b771-09040376c4b1\" objectUID=e0654cf6-1803-4c62-9ea8-5e792bd7ef51 kind=\"CiliumEndpoint\"\nE0523 06:31:57.906198       1 namespace_controller.go:162] deletion of namespace svc-latency-3162 failed: unexpected items still remain in namespace: svc-latency-3162 for gvr: /v1, Resource=pods\nI0523 06:31:57.945338       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8dlxl-mvnc6\" objectUID=238cbf06-92e8-4e1d-bf24-068648ee98e9 kind=\"EndpointSlice\"\nI0523 06:31:57.995372       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8f2zb-xm6s6\" objectUID=ecb70c41-7edc-4162-bbba-5300b6d655d7 kind=\"EndpointSlice\"\nI0523 06:31:58.045396       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8mbfh-h5p6z\" objectUID=e753297f-c980-43bf-983f-46d26fae41ac kind=\"EndpointSlice\"\nI0523 06:31:58.089637       1 pv_controller.go:859] volume \"local-pvgjg9h\" entered phase \"Available\"\nI0523 06:31:58.095403       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8r87v-lzbq8\" objectUID=2ec2cb5b-348b-4a83-8daf-d3a346afafe8 kind=\"EndpointSlice\"\nI0523 06:31:58.124615       1 pv_controller.go:910] claim \"persistent-local-volumes-test-845/pvc-zdskt\" bound to volume \"local-pvgjg9h\"\nI0523 06:31:58.133439       1 pv_controller.go:859] volume \"local-pvgjg9h\" entered phase \"Bound\"\nI0523 06:31:58.133465       1 pv_controller.go:962] volume \"local-pvgjg9h\" bound to claim \"persistent-local-volumes-test-845/pvc-zdskt\"\nI0523 06:31:58.144310       1 pv_controller.go:803] claim \"persistent-local-volumes-test-845/pvc-zdskt\" entered phase \"Bound\"\nI0523 06:31:58.145416       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/svc-latency-rc-hx8gs\" objectUID=1420d39d-c723-41d0-b100-4de0dd440c1f kind=\"Pod\"\nI0523 06:31:58.145439       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-8v2bp-njxmb\" objectUID=31a65f2f-f344-4891-b4ce-43a035b98e72 kind=\"EndpointSlice\"\nI0523 06:31:58.172080       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-6585/test-rollover-controller\" need=1 creating=1\nI0523 06:31:58.176006       1 event.go:291] \"Event occurred\" object=\"deployment-6585/test-rollover-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rollover-controller-27lj7\"\nI0523 06:31:58.200858       1 
garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-978jj-mtsh5\" objectUID=f94fc4f8-1b69-4396-bdc7-d53c04fb35bd kind=\"EndpointSlice\"\nI0523 06:31:58.245373       1 garbagecollector.go:404] \"Processing object\" object=\"svc-latency-3162/latency-svc-97k49-rt9ss\" objectUID=46ee110b-b923-4db2-937d-c4fedce13e3e kind=\"EndpointSlice\"\nI0523 06:31:58.295333       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-1084/webserver-dd94f59b7-qjnqw\" objectUID=56d024b4-bd34-402e-8811-f457c4e49919 kind=\"CiliumEndpoint\"\nI0523 06:31:58.345667       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-1084/webserver-dd94f59b7-5vmjk\" objectUID=959f3639-8d01-449f-acb9-8e23af4194bd kind=\"CiliumEndpoint\"\nI0523 06:31:58.396313       1 garbagecollector.go:404] \"Processing object\" object=\"ephemeral-3638/inline-volume-tester2-sb5nj\" objectUID=26ae83c4-f638-404a-b1b8-0176c918f21d kind=\"CiliumEndpoint\"\nI0523 06:31:58.445654       1 garbagecollector.go:404] \"Processing object\" object=\"ephemeral-3638/inline-volume-tester2-sb5nj\" objectUID=324e0e64-7e1e-4699-8d4b-a8b30c982763 kind=\"Pod\"\nI0523 06:31:58.453919       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=5 creating=1\nI0523 06:31:58.454129       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 5\"\nI0523 06:31:58.459424       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-hlw2g\"\nI0523 06:31:58.501275       1 garbagecollector.go:404] \"Processing object\" object=\"services-432/externalname-service-lwf2f\" objectUID=a761dcc7-42ba-4de7-bee5-6a0dec126e18 kind=\"EndpointSlice\"\nI0523 06:31:58.734628       1 namespace_controller.go:185] Namespace has been deleted security-context-test-3775\nI0523 06:31:58.763539       1 garbagecollector.go:404] \"Processing object\" object=\"services-6248/slow-terminating-unready-pod-67g6s\" objectUID=04d5212b-6fc3-4d92-8e71-660855a84b63 kind=\"CiliumEndpoint\"\nI0523 06:31:58.906629       1 garbagecollector.go:404] \"Processing object\" object=\"services-6248/tolerate-unready-sbd95\" objectUID=04515d84-6835-4c07-8bef-946bdf2eb41e kind=\"EndpointSlice\"\nI0523 06:31:59.445419       1 garbagecollector.go:534] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-3638, name: inline-volume-tester2-sb5nj, uid: 26ae83c4-f638-404a-b1b8-0176c918f21d] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-3638, name: inline-volume-tester2-sb5nj, uid: 324e0e64-7e1e-4699-8d4b-a8b30c982763] is deletingDependents\nI0523 06:31:59.495509       1 garbagecollector.go:519] \"Deleting object\" object=\"services-432/externalname-service-lwf2f\" objectUID=a761dcc7-42ba-4de7-bee5-6a0dec126e18 kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:59.646639       1 garbagecollector.go:519] \"Deleting object\" object=\"services-6248/tolerate-unready-sbd95\" objectUID=04515d84-6835-4c07-8bef-946bdf2eb41e kind=\"EndpointSlice\" propagationPolicy=Background\nI0523 06:31:59.695445       1 garbagecollector.go:519] \"Deleting object\" object=\"ephemeral-3638/inline-volume-tester2-sb5nj\" objectUID=26ae83c4-f638-404a-b1b8-0176c918f21d kind=\"CiliumEndpoint\" 
propagationPolicy=Background\nI0523 06:31:59.815949       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-8744fbf59 to 2\"\nI0523 06:31:59.816120       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-1084/webserver-8744fbf59\" need=2 deleting=1\nI0523 06:31:59.816144       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-1084/webserver-8744fbf59\" relatedReplicaSets=[webserver-dd94f59b7 webserver-8744fbf59]\nI0523 06:31:59.816211       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-8744fbf59\" pod=\"deployment-1084/webserver-8744fbf59-k49xz\"\nI0523 06:31:59.825997       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-1084/webserver-dd94f59b7\" need=6 creating=1\nI0523 06:31:59.826973       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 6\"\nI0523 06:31:59.833704       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:31:59.834292       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-8kd5b\"\nI0523 06:31:59.846646       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-8744fbf59\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-8744fbf59-k49xz\"\nI0523 06:31:59.846901       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-1084/webserver-8744fbf59-k49xz\" objectUID=6da552d8-c273-497a-9897-a50fde4f638f kind=\"CiliumEndpoint\"\nI0523 06:31:59.899348       1 garbagecollector.go:404] \"Processing object\" object=\"ephemeral-3638/inline-volume-tester2-sb5nj\" objectUID=324e0e64-7e1e-4699-8d4b-a8b30c982763 kind=\"Pod\"\nI0523 06:31:59.899452       1 garbagecollector.go:404] \"Processing object\" object=\"ephemeral-3638/inline-volume-tester2-sb5nj\" objectUID=26ae83c4-f638-404a-b1b8-0176c918f21d kind=\"CiliumEndpoint\"\nI0523 06:31:59.949388       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-1084/webserver-8744fbf59-k49xz\" objectUID=6da552d8-c273-497a-9897-a50fde4f638f kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:31:59.995464       1 garbagecollector.go:529] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-3638, name: inline-volume-tester2-sb5nj, uid: 324e0e64-7e1e-4699-8d4b-a8b30c982763]\nI0523 06:32:00.206456       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-1084/webserver-8744fbf59\" need=1 deleting=1\nI0523 06:32:00.206509       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-1084/webserver-8744fbf59\" relatedReplicaSets=[webserver-dd94f59b7 webserver-8744fbf59]\nI0523 06:32:00.206570       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-8744fbf59\" pod=\"deployment-1084/webserver-8744fbf59-bzx2v\"\nI0523 06:32:00.206722       1 event.go:291] \"Event occurred\" 
object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-8744fbf59 to 1\"\nI0523 06:32:00.214938       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:32:00.224344       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-8744fbf59\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-8744fbf59-bzx2v\"\nI0523 06:32:00.224502       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-1084/webserver-8744fbf59-bzx2v\" objectUID=e05e10cf-b2c3-4abe-8828-cf3e403d8fe1 kind=\"CiliumEndpoint\"\nI0523 06:32:00.245864       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-1084/webserver-8744fbf59-bzx2v\" objectUID=e05e10cf-b2c3-4abe-8828-cf3e403d8fe1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:32:00.274943       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-1084/webserver-8744fbf59\" need=0 deleting=1\nI0523 06:32:00.274969       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-1084/webserver-8744fbf59\" relatedReplicaSets=[webserver-dd94f59b7 webserver-8744fbf59]\nI0523 06:32:00.275021       1 controller_utils.go:604] \"Deleting pod\" controller=\"webserver-8744fbf59\" pod=\"deployment-1084/webserver-8744fbf59-dg894\"\nI0523 06:32:00.275716       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-8744fbf59 to 0\"\nI0523 06:32:00.286228       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-1084/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0523 06:32:00.289227       1 garbagecollector.go:404] \"Processing object\" object=\"deployment-1084/webserver-8744fbf59-dg894\" objectUID=9a2eb838-6e4d-4ac7-a35a-02039cce0b35 kind=\"CiliumEndpoint\"\nI0523 06:32:00.290632       1 event.go:291] \"Event occurred\" object=\"deployment-1084/webserver-8744fbf59\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-8744fbf59-dg894\"\nI0523 06:32:00.345868       1 garbagecollector.go:519] \"Deleting object\" object=\"deployment-1084/webserver-8744fbf59-dg894\" objectUID=9a2eb838-6e4d-4ac7-a35a-02039cce0b35 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0523 06:32:01.710175       1 pvc_protection_controller.go:291] PVC provisioning-333/pvc-tmg9w is unused\nI0523 06:32:01.716872       1 pv_controller.go:633] volume \"local-66ksg\" is released and reclaim policy \"Retain\" will be executed\nI0523 06:32:01.720243       1 pv_controller.go:859] volume \"local-66ksg\" entered phase \"Released\"\nI0523 06:32:01.751018       1 event.go:291] \"Event occurred\" object=\"gc-4356/simpletest.dep