Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-02 13:26
Elapsed: 33m11s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0902 13:27:05.976528    4064 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0902 13:27:05.978307    4064 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.23.0-alpha.2+v1.23.0-alpha.1-46-gc7eb08c76f/linux/amd64/kops
I0902 13:27:06.755573    4064 up.go:43] Cleaning up any leaked resources from previous cluster
I0902 13:27:06.755618    4064 dumplogs.go:38] /logs/artifacts/458df2ec-0bf1-11ec-af6c-e6bbdd2c6991/kops toolbox dump --name e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0902 13:27:06.774841    4082 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0902 13:27:06.774954    4082 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-3c2263334e-b172d.test-cncf-aws.k8s.io" not found
W0902 13:27:07.275865    4064 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0902 13:27:07.275944    4064 down.go:48] /logs/artifacts/458df2ec-0bf1-11ec-af6c-e6bbdd2c6991/kops delete cluster --name e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --yes
I0902 13:27:07.295186    4092 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0902 13:27:07.295733    4092 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-3c2263334e-b172d.test-cncf-aws.k8s.io" not found
I0902 13:27:07.819192    4064 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/02 13:27:07 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0902 13:27:07.827036    4064 http.go:37] curl https://ip.jsb.workers.dev
I0902 13:27:07.915779    4064 up.go:144] /logs/artifacts/458df2ec-0bf1-11ec-af6c-e6bbdd2c6991/kops create cluster --name e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.23.0-alpha.1 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-arm64-server-20210825 --channel=alpha --networking=cilium --container-runtime=containerd --zones=eu-central-1a --node-size=m6g.large --master-size=m6g.large --admin-access 35.225.74.23/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48
I0902 13:27:07.935159    4102 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0902 13:27:07.935278    4102 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0902 13:27:07.965151    4102 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0902 13:27:08.496744    4102 new_cluster.go:1052]  Cloud Provider ID = aws
... skipping 31 lines ...

I0902 13:27:33.926771    4064 up.go:181] /logs/artifacts/458df2ec-0bf1-11ec-af6c-e6bbdd2c6991/kops validate cluster --name e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0902 13:27:33.949596    4121 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0902 13:27:33.949707    4121 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-3c2263334e-b172d.test-cncf-aws.k8s.io

W0902 13:27:35.274970    4121 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0902 13:27:45.307672    4121 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0902 13:27:55.339977    4121 validate_cluster.go:232] (will retry): cluster not yet healthy
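As an aside, the validation message above describes kops's DNS bootstrap: the API record is seeded with the placeholder 203.0.113.123 and only becomes real once dns-controller writes the master's IP. That check can be sketched as a small shell helper (the `dns_ready` function and the sample IPs are hypothetical, for illustration only; the placeholder value is taken from the log):

```shell
#!/bin/sh
# kops seeds api.<cluster> with 203.0.113.123 (a TEST-NET-3 address) until
# dns-controller replaces it with the real master IP.
KOPS_PLACEHOLDER="203.0.113.123"

# dns_ready IP — classify the IP currently resolved for api.<cluster>.
# An empty argument stands for NXDOMAIN (record not created yet).
dns_ready() {
  ip="$1"
  if [ -z "$ip" ]; then
    echo "pending: record not created yet"
  elif [ "$ip" = "$KOPS_PLACEHOLDER" ]; then
    echo "pending: still the kops placeholder"
  else
    echo "ready: $ip"
  fi
}

# Against a live cluster you would feed it from dig, e.g.:
#   dns_ready "$(dig +short api.e2e-example.test-cncf-aws.k8s.io | head -n1)"
dns_ready ""
dns_ready "$KOPS_PLACEHOLDER"
dns_ready "198.51.100.7"   # hypothetical real IP: DNS has propagated
```

This mirrors what `kops validate cluster` retries internally: first the lookup fails outright (the `no such host` warnings above), then it resolves to the placeholder, and validation only proceeds once a real address appears.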
... (the same INSTANCE GROUPS / dns apiserver "Validation Failed" output repeated, with a retry roughly every 10 seconds, from 13:28:05 through 13:31:16) ...
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 14 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-tc55p						system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-tc55p" is pending
Pod	kube-system/ebs-csi-controller-567864678-hjw86						system-cluster-critical pod "ebs-csi-controller-567864678-hjw86" is pending
Pod	kube-system/ebs-csi-node-2shlw								system-node-critical pod "ebs-csi-node-2shlw" is pending
Pod	kube-system/ebs-csi-node-jbb6w								system-node-critical pod "ebs-csi-node-jbb6w" is pending
Pod	kube-system/kube-controller-manager-ip-172-20-62-124.eu-central-1.compute.internal	system-cluster-critical pod "kube-controller-manager-ip-172-20-62-124.eu-central-1.compute.internal" is pending

Validation Failed
W0902 13:31:29.621157    4121 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 22 lines ...
Pod	kube-system/ebs-csi-node-2shlw			system-node-critical pod "ebs-csi-node-2shlw" is pending
Pod	kube-system/ebs-csi-node-drnd2			system-node-critical pod "ebs-csi-node-drnd2" is pending
Pod	kube-system/ebs-csi-node-fdzxn			system-node-critical pod "ebs-csi-node-fdzxn" is pending
Pod	kube-system/ebs-csi-node-jbb6w			system-node-critical pod "ebs-csi-node-jbb6w" is pending
Pod	kube-system/ebs-csi-node-t6jzr			system-node-critical pod "ebs-csi-node-t6jzr" is pending

Validation Failed
W0902 13:31:41.574070    4121 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 20 lines ...
Pod	kube-system/ebs-csi-node-2shlw			system-node-critical pod "ebs-csi-node-2shlw" is pending
Pod	kube-system/ebs-csi-node-drnd2			system-node-critical pod "ebs-csi-node-drnd2" is pending
Pod	kube-system/ebs-csi-node-fdzxn			system-node-critical pod "ebs-csi-node-fdzxn" is pending
Pod	kube-system/ebs-csi-node-jbb6w			system-node-critical pod "ebs-csi-node-jbb6w" is pending
Pod	kube-system/ebs-csi-node-t6jzr			system-node-critical pod "ebs-csi-node-t6jzr" is pending

Validation Failed
W0902 13:31:53.586664    4121 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 18 lines ...
Pod	kube-system/ebs-csi-node-2shlw			system-node-critical pod "ebs-csi-node-2shlw" is pending
Pod	kube-system/ebs-csi-node-drnd2			system-node-critical pod "ebs-csi-node-drnd2" is pending
Pod	kube-system/ebs-csi-node-fdzxn			system-node-critical pod "ebs-csi-node-fdzxn" is pending
Pod	kube-system/ebs-csi-node-jbb6w			system-node-critical pod "ebs-csi-node-jbb6w" is pending
Pod	kube-system/ebs-csi-node-t6jzr			system-node-critical pod "ebs-csi-node-t6jzr" is pending

Validation Failed
W0902 13:32:05.510038    4121 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 16 lines ...
Pod	kube-system/ebs-csi-node-2shlw			system-node-critical pod "ebs-csi-node-2shlw" is pending
Pod	kube-system/ebs-csi-node-drnd2			system-node-critical pod "ebs-csi-node-drnd2" is pending
Pod	kube-system/ebs-csi-node-fdzxn			system-node-critical pod "ebs-csi-node-fdzxn" is pending
Pod	kube-system/ebs-csi-node-jbb6w			system-node-critical pod "ebs-csi-node-jbb6w" is pending
Pod	kube-system/ebs-csi-node-t6jzr			system-node-critical pod "ebs-csi-node-t6jzr" is pending

Validation Failed
W0902 13:32:17.394138    4121 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 12 lines ...
Pod	kube-system/ebs-csi-node-2shlw			system-node-critical pod "ebs-csi-node-2shlw" is pending
Pod	kube-system/ebs-csi-node-drnd2			system-node-critical pod "ebs-csi-node-drnd2" is pending
Pod	kube-system/ebs-csi-node-fdzxn			system-node-critical pod "ebs-csi-node-fdzxn" is pending
Pod	kube-system/ebs-csi-node-jbb6w			system-node-critical pod "ebs-csi-node-jbb6w" is pending
Pod	kube-system/ebs-csi-node-t6jzr			system-node-critical pod "ebs-csi-node-t6jzr" is pending

Validation Failed
W0902 13:32:29.320659    4121 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 637 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 175 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:01.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6748" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:01.472: INFO: Only supported for providers [openstack] (not aws)
... skipping 89 lines ...
STEP: Building a namespace api object, basename kubectl
W0902 13:35:00.264922    4885 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep  2 13:35:00.265: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should ignore not found error with --for=delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
STEP: calling kubectl wait --for=delete
Sep  2 13:35:00.544: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5265 wait --for=delete pod/doesnotexist'
Sep  2 13:35:01.681: INFO: stderr: ""
Sep  2 13:35:01.681: INFO: stdout: ""
Sep  2 13:35:01.681: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5265 wait --for=delete pod --selector=app.kubernetes.io/name=noexist'
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:02.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5265" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete","total":-1,"completed":1,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:03.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7367" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:03.809: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 52 lines ...
• [SLOW TEST:8.766 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-cb2f3ea0-9596-48ad-9485-43886607c70b
STEP: Creating secret with name secret-projected-all-test-volume-19217990-b066-4892-8cea-5b4a6e53a73d
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep  2 13:34:59.780: INFO: Waiting up to 5m0s for pod "projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58" in namespace "projected-2625" to be "Succeeded or Failed"
Sep  2 13:34:59.909: INFO: Pod "projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58": Phase="Pending", Reason="", readiness=false. Elapsed: 128.546148ms
Sep  2 13:35:02.019: INFO: Pod "projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238939749s
Sep  2 13:35:04.130: INFO: Pod "projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.349895892s
Sep  2 13:35:06.321: INFO: Pod "projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.540438161s
STEP: Saw pod success
Sep  2 13:35:06.321: INFO: Pod "projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58" satisfied condition "Succeeded or Failed"
Sep  2 13:35:06.434: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58 container projected-all-volume-test: <nil>
STEP: delete the pod
Sep  2 13:35:07.497: INFO: Waiting for pod projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58 to disappear
Sep  2 13:35:07.612: INFO: Pod projected-volume-ac0bd3df-1842-4798-8915-f4b158a3be58 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.217 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:07.959: INFO: Only supported for providers [azure] (not aws)
... skipping 235 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:11.407: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 134 lines ...
• [SLOW TEST:8.563 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:12.400: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
Sep  2 13:35:13.332: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.775 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":1,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:13.623: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:14.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-352" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:14.355: INFO: Only supported for providers [vsphere] (not aws)
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:997
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1042
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":1,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:14.816: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 220 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 111 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:17.230: INFO: Only supported for providers [openstack] (not aws)
... skipping 71 lines ...
• [SLOW TEST:19.166 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:17.945: INFO: Only supported for providers [openstack] (not aws)
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:26.353: INFO: Only supported for providers [azure] (not aws)
... skipping 102 lines ...
STEP: Deleting pod hostexec-ip-172-20-45-138.eu-central-1.compute.internal-qtcxs in namespace volumemode-5795
Sep  2 13:35:16.518: INFO: Deleting pod "pod-7ace8658-de84-4537-8367-9e2c80f01037" in namespace "volumemode-5795"
Sep  2 13:35:16.633: INFO: Wait up to 5m0s for pod "pod-7ace8658-de84-4537-8367-9e2c80f01037" to be fully deleted
STEP: Deleting pv and pvc
Sep  2 13:35:20.878: INFO: Deleting PersistentVolumeClaim "pvc-tlpvl"
Sep  2 13:35:20.988: INFO: Deleting PersistentVolume "aws-f6rs4"
Sep  2 13:35:21.341: INFO: Couldn't delete PD "aws://eu-central-1a/vol-08f70751aa6a726e9", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08f70751aa6a726e9 is currently attached to i-057e9cd15b72f26d1
	status code: 400, request id: 36cb4eff-86d9-451e-bc8f-2739ddf0f656
Sep  2 13:35:26.986: INFO: Successfully deleted PD "aws://eu-central-1a/vol-08f70751aa6a726e9".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:26.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-5795" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:27.216: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1360
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1377
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-9vr6
STEP: Creating a pod to test atomic-volume-subpath
Sep  2 13:34:59.741: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9vr6" in namespace "subpath-1009" to be "Succeeded or Failed"
Sep  2 13:34:59.874: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Pending", Reason="", readiness=false. Elapsed: 132.268925ms
Sep  2 13:35:01.988: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246912523s
Sep  2 13:35:04.099: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Running", Reason="", readiness=true. Elapsed: 4.357669421s
Sep  2 13:35:06.210: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Running", Reason="", readiness=true. Elapsed: 6.468988489s
Sep  2 13:35:08.321: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Running", Reason="", readiness=true. Elapsed: 8.579716761s
Sep  2 13:35:10.433: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Running", Reason="", readiness=true. Elapsed: 10.691371476s
... skipping 4 lines ...
Sep  2 13:35:21.019: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Running", Reason="", readiness=true. Elapsed: 21.277635675s
Sep  2 13:35:23.138: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Running", Reason="", readiness=true. Elapsed: 23.396403243s
Sep  2 13:35:25.249: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Running", Reason="", readiness=true. Elapsed: 25.507352408s
Sep  2 13:35:27.381: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Running", Reason="", readiness=true. Elapsed: 27.639159447s
Sep  2 13:35:29.491: INFO: Pod "pod-subpath-test-configmap-9vr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.749788947s
STEP: Saw pod success
Sep  2 13:35:29.491: INFO: Pod "pod-subpath-test-configmap-9vr6" satisfied condition "Succeeded or Failed"
Sep  2 13:35:29.602: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-subpath-test-configmap-9vr6 container test-container-subpath-configmap-9vr6: <nil>
STEP: delete the pod
Sep  2 13:35:29.864: INFO: Waiting for pod pod-subpath-test-configmap-9vr6 to disappear
Sep  2 13:35:29.974: INFO: Pod pod-subpath-test-configmap-9vr6 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9vr6
Sep  2 13:35:29.974: INFO: Deleting pod "pod-subpath-test-configmap-9vr6" in namespace "subpath-1009"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:35:15.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep  2 13:35:15.884: INFO: Waiting up to 5m0s for pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6" in namespace "downward-api-5078" to be "Succeeded or Failed"
Sep  2 13:35:16.008: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6": Phase="Pending", Reason="", readiness=false. Elapsed: 124.108644ms
Sep  2 13:35:18.131: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247036945s
Sep  2 13:35:20.241: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357737919s
Sep  2 13:35:22.354: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469969185s
Sep  2 13:35:24.463: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.578998699s
Sep  2 13:35:26.571: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.687535018s
Sep  2 13:35:28.679: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.795043054s
Sep  2 13:35:30.808: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.923843406s
STEP: Saw pod success
Sep  2 13:35:30.808: INFO: Pod "downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6" satisfied condition "Succeeded or Failed"
Sep  2 13:35:30.915: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6 container dapi-container: <nil>
STEP: delete the pod
Sep  2 13:35:31.642: INFO: Waiting for pod downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6 to disappear
Sep  2 13:35:31.750: INFO: Pod downward-api-43a566e7-9fd0-475a-98b0-0d9549431af6 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.792 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:35:17.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-e5a4e700-c771-43b1-8787-d017a8a79bc6
STEP: Creating a pod to test consume configMaps
Sep  2 13:35:18.854: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed" in namespace "projected-3384" to be "Succeeded or Failed"
Sep  2 13:35:18.972: INFO: Pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed": Phase="Pending", Reason="", readiness=false. Elapsed: 118.402355ms
Sep  2 13:35:21.095: INFO: Pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241586241s
Sep  2 13:35:23.207: INFO: Pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353518083s
Sep  2 13:35:25.317: INFO: Pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463235798s
Sep  2 13:35:27.427: INFO: Pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.573166329s
Sep  2 13:35:29.539: INFO: Pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed": Phase="Pending", Reason="", readiness=false. Elapsed: 10.685154558s
Sep  2 13:35:31.650: INFO: Pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.795975911s
STEP: Saw pod success
Sep  2 13:35:31.650: INFO: Pod "pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed" satisfied condition "Succeeded or Failed"
Sep  2 13:35:31.759: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep  2 13:35:32.013: INFO: Waiting for pod pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed to disappear
Sep  2 13:35:32.143: INFO: Pod pod-projected-configmaps-50544dd2-aafc-4059-9aac-201304da50ed no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.414 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 23 lines ...
Sep  2 13:35:18.117: INFO: PersistentVolumeClaim pvc-t6jrq found but phase is Pending instead of Bound.
Sep  2 13:35:20.229: INFO: PersistentVolumeClaim pvc-t6jrq found and phase=Bound (14.929990643s)
Sep  2 13:35:20.229: INFO: Waiting up to 3m0s for PersistentVolume local-gb9sz to have phase Bound
Sep  2 13:35:20.339: INFO: PersistentVolume local-gb9sz found and phase=Bound (110.159617ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ldwb
STEP: Creating a pod to test subpath
Sep  2 13:35:20.671: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ldwb" in namespace "provisioning-4157" to be "Succeeded or Failed"
Sep  2 13:35:20.780: INFO: Pod "pod-subpath-test-preprovisionedpv-ldwb": Phase="Pending", Reason="", readiness=false. Elapsed: 109.789453ms
Sep  2 13:35:22.892: INFO: Pod "pod-subpath-test-preprovisionedpv-ldwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221350605s
Sep  2 13:35:25.003: INFO: Pod "pod-subpath-test-preprovisionedpv-ldwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332288285s
Sep  2 13:35:27.114: INFO: Pod "pod-subpath-test-preprovisionedpv-ldwb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44377472s
Sep  2 13:35:29.225: INFO: Pod "pod-subpath-test-preprovisionedpv-ldwb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554592568s
Sep  2 13:35:31.337: INFO: Pod "pod-subpath-test-preprovisionedpv-ldwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.665998421s
STEP: Saw pod success
Sep  2 13:35:31.337: INFO: Pod "pod-subpath-test-preprovisionedpv-ldwb" satisfied condition "Succeeded or Failed"
Sep  2 13:35:31.453: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-ldwb container test-container-subpath-preprovisionedpv-ldwb: <nil>
STEP: delete the pod
Sep  2 13:35:31.714: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ldwb to disappear
Sep  2 13:35:31.824: INFO: Pod pod-subpath-test-preprovisionedpv-ldwb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ldwb
Sep  2 13:35:31.825: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ldwb" in namespace "provisioning-4157"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:33.402: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 303 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
Sep  2 13:35:14.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Sep  2 13:35:15.394: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  2 13:35:15.624: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9791" in namespace "provisioning-9791" to be "Succeeded or Failed"
Sep  2 13:35:15.746: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Pending", Reason="", readiness=false. Elapsed: 122.121618ms
Sep  2 13:35:17.857: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233039479s
Sep  2 13:35:19.968: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344082094s
Sep  2 13:35:22.079: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454945873s
Sep  2 13:35:24.189: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56510861s
Sep  2 13:35:26.300: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.675685102s
STEP: Saw pod success
Sep  2 13:35:26.300: INFO: Pod "hostpath-symlink-prep-provisioning-9791" satisfied condition "Succeeded or Failed"
Sep  2 13:35:26.300: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9791" in namespace "provisioning-9791"
Sep  2 13:35:26.418: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9791" to be fully deleted
Sep  2 13:35:26.528: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-c47x
STEP: Creating a pod to test subpath
Sep  2 13:35:26.642: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-c47x" in namespace "provisioning-9791" to be "Succeeded or Failed"
Sep  2 13:35:26.757: INFO: Pod "pod-subpath-test-inlinevolume-c47x": Phase="Pending", Reason="", readiness=false. Elapsed: 114.879497ms
Sep  2 13:35:28.868: INFO: Pod "pod-subpath-test-inlinevolume-c47x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225315195s
Sep  2 13:35:30.978: INFO: Pod "pod-subpath-test-inlinevolume-c47x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335883471s
Sep  2 13:35:33.089: INFO: Pod "pod-subpath-test-inlinevolume-c47x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.446623459s
STEP: Saw pod success
Sep  2 13:35:33.089: INFO: Pod "pod-subpath-test-inlinevolume-c47x" satisfied condition "Succeeded or Failed"
Sep  2 13:35:33.209: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-c47x container test-container-subpath-inlinevolume-c47x: <nil>
STEP: delete the pod
Sep  2 13:35:33.451: INFO: Waiting for pod pod-subpath-test-inlinevolume-c47x to disappear
Sep  2 13:35:33.561: INFO: Pod pod-subpath-test-inlinevolume-c47x no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-c47x
Sep  2 13:35:33.561: INFO: Deleting pod "pod-subpath-test-inlinevolume-c47x" in namespace "provisioning-9791"
STEP: Deleting pod
Sep  2 13:35:33.680: INFO: Deleting pod "pod-subpath-test-inlinevolume-c47x" in namespace "provisioning-9791"
Sep  2 13:35:33.909: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9791" in namespace "provisioning-9791" to be "Succeeded or Failed"
Sep  2 13:35:34.018: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Pending", Reason="", readiness=false. Elapsed: 109.104731ms
Sep  2 13:35:36.129: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21980013s
Sep  2 13:35:38.240: INFO: Pod "hostpath-symlink-prep-provisioning-9791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33049302s
STEP: Saw pod success
Sep  2 13:35:38.240: INFO: Pod "hostpath-symlink-prep-provisioning-9791" satisfied condition "Succeeded or Failed"
Sep  2 13:35:38.240: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9791" in namespace "provisioning-9791"
Sep  2 13:35:38.368: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9791" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:38.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9791" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:38.708: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":24,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:35:38.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Sep  2 13:35:38.679: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-2be6d65b-5a7c-48e8-964c-a73b326ecc36" in namespace "security-context-test-7794" to be "Succeeded or Failed"
Sep  2 13:35:38.817: INFO: Pod "busybox-readonly-true-2be6d65b-5a7c-48e8-964c-a73b326ecc36": Phase="Pending", Reason="", readiness=false. Elapsed: 137.973788ms
Sep  2 13:35:40.927: INFO: Pod "busybox-readonly-true-2be6d65b-5a7c-48e8-964c-a73b326ecc36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.248246816s
Sep  2 13:35:43.044: INFO: Pod "busybox-readonly-true-2be6d65b-5a7c-48e8-964c-a73b326ecc36": Phase="Failed", Reason="", readiness=false. Elapsed: 4.364474688s
Sep  2 13:35:43.044: INFO: Pod "busybox-readonly-true-2be6d65b-5a7c-48e8-964c-a73b326ecc36" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:43.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7794" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":1,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should adopt matching orphans and release non-matching pods
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:167
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:35:40.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep  2 13:35:41.331: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-147877b2-8a48-469f-9ee0-4ce55e94d1dc" in namespace "security-context-test-8128" to be "Succeeded or Failed"
Sep  2 13:35:41.441: INFO: Pod "busybox-readonly-false-147877b2-8a48-469f-9ee0-4ce55e94d1dc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.885182ms
Sep  2 13:35:43.552: INFO: Pod "busybox-readonly-false-147877b2-8a48-469f-9ee0-4ce55e94d1dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220684424s
Sep  2 13:35:45.668: INFO: Pod "busybox-readonly-false-147877b2-8a48-469f-9ee0-4ce55e94d1dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336931628s
Sep  2 13:35:45.668: INFO: Pod "busybox-readonly-false-147877b2-8a48-469f-9ee0-4ce55e94d1dc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:45.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8128" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:18.429 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:46.571: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep  2 13:35:27.929: INFO: Waiting up to 5m0s for pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2" in namespace "emptydir-7462" to be "Succeeded or Failed"
Sep  2 13:35:28.038: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 109.043615ms
Sep  2 13:35:30.147: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218218101s
Sep  2 13:35:32.257: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328217556s
Sep  2 13:35:34.367: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438126792s
Sep  2 13:35:36.480: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55082654s
Sep  2 13:35:38.590: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661291542s
Sep  2 13:35:40.700: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.771580939s
Sep  2 13:35:42.811: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 14.882049251s
Sep  2 13:35:44.927: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.99836765s
Sep  2 13:35:47.036: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.107745803s
STEP: Saw pod success
Sep  2 13:35:47.037: INFO: Pod "pod-596a6ab4-824a-43b1-81eb-afa75b3142a2" satisfied condition "Succeeded or Failed"
Sep  2 13:35:47.147: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-596a6ab4-824a-43b1-81eb-afa75b3142a2 container test-container: <nil>
STEP: delete the pod
Sep  2 13:35:47.376: INFO: Waiting for pod pod-596a6ab4-824a-43b1-81eb-afa75b3142a2 to disappear
Sep  2 13:35:47.490: INFO: Pod pod-596a6ab4-824a-43b1-81eb-afa75b3142a2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:47.754: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:47.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-7548" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":4,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:13.950 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":2,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:48.308: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 92 lines ...
Sep  2 13:35:17.664: INFO: PersistentVolumeClaim pvc-nvmgw found but phase is Pending instead of Bound.
Sep  2 13:35:19.773: INFO: PersistentVolumeClaim pvc-nvmgw found and phase=Bound (14.875034669s)
Sep  2 13:35:19.773: INFO: Waiting up to 3m0s for PersistentVolume local-vvlnn to have phase Bound
Sep  2 13:35:19.884: INFO: PersistentVolume local-vvlnn found and phase=Bound (110.637354ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fqd6
STEP: Creating a pod to test subpath
Sep  2 13:35:20.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fqd6" in namespace "provisioning-9061" to be "Succeeded or Failed"
Sep  2 13:35:20.322: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 109.049816ms
Sep  2 13:35:22.432: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218694686s
Sep  2 13:35:24.541: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327609666s
Sep  2 13:35:26.653: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439554097s
Sep  2 13:35:28.761: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548111554s
Sep  2 13:35:30.871: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657798181s
Sep  2 13:35:32.983: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.769371968s
Sep  2 13:35:35.093: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.879565176s
STEP: Saw pod success
Sep  2 13:35:35.093: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6" satisfied condition "Succeeded or Failed"
Sep  2 13:35:35.203: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fqd6 container test-container-subpath-preprovisionedpv-fqd6: <nil>
STEP: delete the pod
Sep  2 13:35:35.491: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fqd6 to disappear
Sep  2 13:35:35.601: INFO: Pod pod-subpath-test-preprovisionedpv-fqd6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fqd6
Sep  2 13:35:35.601: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fqd6" in namespace "provisioning-9061"
STEP: Creating pod pod-subpath-test-preprovisionedpv-fqd6
STEP: Creating a pod to test subpath
Sep  2 13:35:35.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fqd6" in namespace "provisioning-9061" to be "Succeeded or Failed"
Sep  2 13:35:35.964: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 120.13131ms
Sep  2 13:35:38.072: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228831889s
Sep  2 13:35:40.183: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33908962s
Sep  2 13:35:42.302: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.459048667s
Sep  2 13:35:44.411: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.568038602s
Sep  2 13:35:46.523: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.679152408s
STEP: Saw pod success
Sep  2 13:35:46.523: INFO: Pod "pod-subpath-test-preprovisionedpv-fqd6" satisfied condition "Succeeded or Failed"
Sep  2 13:35:46.650: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fqd6 container test-container-subpath-preprovisionedpv-fqd6: <nil>
STEP: delete the pod
Sep  2 13:35:46.881: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fqd6 to disappear
Sep  2 13:35:46.989: INFO: Pod pod-subpath-test-preprovisionedpv-fqd6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fqd6
Sep  2 13:35:46.989: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fqd6" in namespace "provisioning-9061"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:35:34.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Sep  2 13:35:39.415: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-8b725bb0-ce38-4f30-9fac-1e0480f03960" in namespace "security-context-test-895" to be "Succeeded or Failed"
Sep  2 13:35:39.524: INFO: Pod "alpine-nnp-nil-8b725bb0-ce38-4f30-9fac-1e0480f03960": Phase="Pending", Reason="", readiness=false. Elapsed: 109.000872ms
Sep  2 13:35:41.635: INFO: Pod "alpine-nnp-nil-8b725bb0-ce38-4f30-9fac-1e0480f03960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219720762s
Sep  2 13:35:43.747: INFO: Pod "alpine-nnp-nil-8b725bb0-ce38-4f30-9fac-1e0480f03960": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331657635s
Sep  2 13:35:45.857: INFO: Pod "alpine-nnp-nil-8b725bb0-ce38-4f30-9fac-1e0480f03960": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442013566s
Sep  2 13:35:47.969: INFO: Pod "alpine-nnp-nil-8b725bb0-ce38-4f30-9fac-1e0480f03960": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554070203s
Sep  2 13:35:50.081: INFO: Pod "alpine-nnp-nil-8b725bb0-ce38-4f30-9fac-1e0480f03960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.665682706s
Sep  2 13:35:50.081: INFO: Pod "alpine-nnp-nil-8b725bb0-ce38-4f30-9fac-1e0480f03960" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:50.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-895" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:50.432: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Sep  2 13:35:48.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep  2 13:35:49.235: INFO: Waiting up to 5m0s for pod "pod-093a7c17-78d9-4792-b083-5390835acd89" in namespace "emptydir-8783" to be "Succeeded or Failed"
Sep  2 13:35:49.344: INFO: Pod "pod-093a7c17-78d9-4792-b083-5390835acd89": Phase="Pending", Reason="", readiness=false. Elapsed: 108.628845ms
Sep  2 13:35:51.452: INFO: Pod "pod-093a7c17-78d9-4792-b083-5390835acd89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.217593005s
STEP: Saw pod success
Sep  2 13:35:51.453: INFO: Pod "pod-093a7c17-78d9-4792-b083-5390835acd89" satisfied condition "Succeeded or Failed"
Sep  2 13:35:51.562: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-093a7c17-78d9-4792-b083-5390835acd89 container test-container: <nil>
STEP: delete the pod
Sep  2 13:35:51.804: INFO: Waiting for pod pod-093a7c17-78d9-4792-b083-5390835acd89 to disappear
Sep  2 13:35:51.913: INFO: Pod pod-093a7c17-78d9-4792-b083-5390835acd89 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:51.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8783" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:52.149: INFO: Only supported for providers [openstack] (not aws)
... skipping 224 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:55.539: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:35:55.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7185" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":3,"skipped":25,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:55.678: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
Sep  2 13:35:48.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep  2 13:35:49.041: INFO: Waiting up to 5m0s for pod "pod-24fe4dd3-2579-4351-8917-9ef37fe7a037" in namespace "emptydir-6478" to be "Succeeded or Failed"
Sep  2 13:35:49.151: INFO: Pod "pod-24fe4dd3-2579-4351-8917-9ef37fe7a037": Phase="Pending", Reason="", readiness=false. Elapsed: 109.857211ms
Sep  2 13:35:51.263: INFO: Pod "pod-24fe4dd3-2579-4351-8917-9ef37fe7a037": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221254703s
Sep  2 13:35:53.389: INFO: Pod "pod-24fe4dd3-2579-4351-8917-9ef37fe7a037": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347715623s
Sep  2 13:35:55.500: INFO: Pod "pod-24fe4dd3-2579-4351-8917-9ef37fe7a037": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.459027608s
STEP: Saw pod success
Sep  2 13:35:55.500: INFO: Pod "pod-24fe4dd3-2579-4351-8917-9ef37fe7a037" satisfied condition "Succeeded or Failed"
Sep  2 13:35:55.611: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-24fe4dd3-2579-4351-8917-9ef37fe7a037 container test-container: <nil>
STEP: delete the pod
Sep  2 13:35:55.835: INFO: Waiting for pod pod-24fe4dd3-2579-4351-8917-9ef37fe7a037 to disappear
Sep  2 13:35:55.945: INFO: Pod pod-24fe4dd3-2579-4351-8917-9ef37fe7a037 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.791 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":54,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:35:50.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:35:57.583: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:997
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:998
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:00.194: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
• [SLOW TEST:24.844 seconds]
[sig-api-machinery] Servers with support for API chunking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":2,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:01.669: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Sep  2 13:35:56.852: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-6063" to be "Succeeded or Failed"
Sep  2 13:35:56.962: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 109.785742ms
Sep  2 13:35:59.074: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221860641s
Sep  2 13:36:01.203: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350428173s
Sep  2 13:36:03.321: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.468783757s
Sep  2 13:36:03.321: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:03.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6063" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":4,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:03.674: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 43 lines ...
Sep  2 13:35:32.933: INFO: PersistentVolumeClaim pvc-5g2kl found but phase is Pending instead of Bound.
Sep  2 13:35:35.044: INFO: PersistentVolumeClaim pvc-5g2kl found and phase=Bound (10.671914717s)
Sep  2 13:35:35.044: INFO: Waiting up to 3m0s for PersistentVolume local-t72ld to have phase Bound
Sep  2 13:35:35.167: INFO: PersistentVolume local-t72ld found and phase=Bound (122.699116ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mchq
STEP: Creating a pod to test atomic-volume-subpath
Sep  2 13:35:35.498: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mchq" in namespace "provisioning-4991" to be "Succeeded or Failed"
Sep  2 13:35:35.609: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Pending", Reason="", readiness=false. Elapsed: 110.242089ms
Sep  2 13:35:37.729: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230535302s
Sep  2 13:35:39.839: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340743139s
Sep  2 13:35:41.952: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453199284s
Sep  2 13:35:44.062: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Running", Reason="", readiness=true. Elapsed: 8.563705351s
Sep  2 13:35:46.176: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Running", Reason="", readiness=true. Elapsed: 10.677308907s
... skipping 2 lines ...
Sep  2 13:35:52.525: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Running", Reason="", readiness=true. Elapsed: 17.026935904s
Sep  2 13:35:54.635: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Running", Reason="", readiness=true. Elapsed: 19.137149028s
Sep  2 13:35:56.748: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Running", Reason="", readiness=true. Elapsed: 21.249801473s
Sep  2 13:35:58.867: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Running", Reason="", readiness=true. Elapsed: 23.368924354s
Sep  2 13:36:00.978: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.480132054s
STEP: Saw pod success
Sep  2 13:36:00.979: INFO: Pod "pod-subpath-test-preprovisionedpv-mchq" satisfied condition "Succeeded or Failed"
Sep  2 13:36:01.088: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-mchq container test-container-subpath-preprovisionedpv-mchq: <nil>
STEP: delete the pod
Sep  2 13:36:01.408: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mchq to disappear
Sep  2 13:36:01.523: INFO: Pod pod-subpath-test-preprovisionedpv-mchq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mchq
Sep  2 13:36:01.523: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mchq" in namespace "provisioning-4991"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:03.820: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 82 lines ...
Sep  2 13:35:46.751: INFO: PersistentVolumeClaim pvc-8c7g2 found but phase is Pending instead of Bound.
Sep  2 13:35:48.860: INFO: PersistentVolumeClaim pvc-8c7g2 found and phase=Bound (6.438972874s)
Sep  2 13:35:48.860: INFO: Waiting up to 3m0s for PersistentVolume local-bkvls to have phase Bound
Sep  2 13:35:48.972: INFO: PersistentVolume local-bkvls found and phase=Bound (112.179794ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sv46
STEP: Creating a pod to test subpath
Sep  2 13:35:49.302: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sv46" in namespace "provisioning-2527" to be "Succeeded or Failed"
Sep  2 13:35:49.413: INFO: Pod "pod-subpath-test-preprovisionedpv-sv46": Phase="Pending", Reason="", readiness=false. Elapsed: 111.110438ms
Sep  2 13:35:51.525: INFO: Pod "pod-subpath-test-preprovisionedpv-sv46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222287329s
Sep  2 13:35:53.636: INFO: Pod "pod-subpath-test-preprovisionedpv-sv46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333208599s
Sep  2 13:35:55.745: INFO: Pod "pod-subpath-test-preprovisionedpv-sv46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442854751s
Sep  2 13:35:57.854: INFO: Pod "pod-subpath-test-preprovisionedpv-sv46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552078663s
Sep  2 13:35:59.964: INFO: Pod "pod-subpath-test-preprovisionedpv-sv46": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661386018s
Sep  2 13:36:02.074: INFO: Pod "pod-subpath-test-preprovisionedpv-sv46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.771927842s
STEP: Saw pod success
Sep  2 13:36:02.074: INFO: Pod "pod-subpath-test-preprovisionedpv-sv46" satisfied condition "Succeeded or Failed"
Sep  2 13:36:02.183: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-sv46 container test-container-subpath-preprovisionedpv-sv46: <nil>
STEP: delete the pod
Sep  2 13:36:02.408: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sv46 to disappear
Sep  2 13:36:02.517: INFO: Pod pod-subpath-test-preprovisionedpv-sv46 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sv46
Sep  2 13:36:02.517: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sv46" in namespace "provisioning-2527"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":38,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 142 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:05.411: INFO: Only supported for providers [gce gke] (not aws)
... skipping 78 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":2,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:08.091: INFO: Only supported for providers [vsphere] (not aws)
... skipping 72 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 41 lines ...
STEP: Destroying namespace "services-3575" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:09.091: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:10.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2154" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:10.464: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 202 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:10.823: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Creating a kubernetes client
Sep  2 13:34:58.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
W0902 13:34:59.669524    4838 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep  2 13:34:59.669: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-7439" for this suite.


• [SLOW TEST:72.334 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:73.470 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:319
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:12.181: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 181 lines ...
• [SLOW TEST:58.198 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":4,"skipped":51,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:12.530: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":4,"skipped":42,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:12.623: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 33 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 107 lines ...
STEP: Destroying namespace "apply-8412" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":2,"skipped":17,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:14.201: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":5,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:14.304: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 175 lines ...
Sep  2 13:36:01.943: INFO: PersistentVolumeClaim pvc-qkw8f found but phase is Pending instead of Bound.
Sep  2 13:36:04.053: INFO: PersistentVolumeClaim pvc-qkw8f found and phase=Bound (12.775410282s)
Sep  2 13:36:04.053: INFO: Waiting up to 3m0s for PersistentVolume local-qm6l8 to have phase Bound
Sep  2 13:36:04.163: INFO: PersistentVolume local-qm6l8 found and phase=Bound (109.787379ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qnh7
STEP: Creating a pod to test subpath
Sep  2 13:36:04.495: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qnh7" in namespace "provisioning-7360" to be "Succeeded or Failed"
Sep  2 13:36:04.610: INFO: Pod "pod-subpath-test-preprovisionedpv-qnh7": Phase="Pending", Reason="", readiness=false. Elapsed: 114.7875ms
Sep  2 13:36:06.789: INFO: Pod "pod-subpath-test-preprovisionedpv-qnh7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294034899s
Sep  2 13:36:08.900: INFO: Pod "pod-subpath-test-preprovisionedpv-qnh7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.404792302s
Sep  2 13:36:11.011: INFO: Pod "pod-subpath-test-preprovisionedpv-qnh7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516375989s
Sep  2 13:36:13.124: INFO: Pod "pod-subpath-test-preprovisionedpv-qnh7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.628748945s
STEP: Saw pod success
Sep  2 13:36:13.124: INFO: Pod "pod-subpath-test-preprovisionedpv-qnh7" satisfied condition "Succeeded or Failed"
Sep  2 13:36:13.235: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-qnh7 container test-container-subpath-preprovisionedpv-qnh7: <nil>
STEP: delete the pod
Sep  2 13:36:13.465: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qnh7 to disappear
Sep  2 13:36:13.575: INFO: Pod pod-subpath-test-preprovisionedpv-qnh7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qnh7
Sep  2 13:36:13.575: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qnh7" in namespace "provisioning-7360"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":3,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:15.144: INFO: Only supported for providers [gce gke] (not aws)
... skipping 225 lines ...
• [SLOW TEST:5.560 seconds]
[sig-api-machinery] Generated clientset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:103
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:35:16.023: INFO: >>> kubeConfig: /root/.kube/config
... skipping 153 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:16.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Sep  2 13:36:20.196: INFO: Creating a PV followed by a PVC
Sep  2 13:36:20.417: INFO: Waiting for PV local-pvd2z94 to bind to PVC pvc-x9d6s
Sep  2 13:36:20.417: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-x9d6s] to have phase Bound
Sep  2 13:36:20.526: INFO: PersistentVolumeClaim pvc-x9d6s found and phase=Bound (109.077523ms)
Sep  2 13:36:20.526: INFO: Waiting up to 3m0s for PersistentVolume local-pvd2z94 to have phase Bound
Sep  2 13:36:20.636: INFO: PersistentVolume local-pvd2z94 found and phase=Bound (109.737852ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
Sep  2 13:36:20.966: INFO: Waiting up to 5m0s for pod "pod-fb6f438d-b136-4565-9221-3ca318860cb2" in namespace "persistent-local-volumes-test-2192" to be "Unschedulable"
Sep  2 13:36:21.076: INFO: Pod "pod-fb6f438d-b136-4565-9221-3ca318860cb2": Phase="Pending", Reason="", readiness=false. Elapsed: 109.821528ms
Sep  2 13:36:21.076: INFO: Pod "pod-fb6f438d-b136-4565-9221-3ca318860cb2" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:6.097 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":5,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:10.200 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":5,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:22.878: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
Sep  2 13:36:01.261: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:01.401: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:01.730: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:01.849: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:01.958: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:02.067: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:02.286: INFO: Lookups using dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local jessie_udp@dns-test-service-2.dns-1558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1558.svc.cluster.local]

Sep  2 13:36:07.395: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:07.505: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:07.617: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:07.726: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:08.063: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:08.172: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:08.281: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:08.390: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:08.608: INFO: Lookups using dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local jessie_udp@dns-test-service-2.dns-1558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1558.svc.cluster.local]

Sep  2 13:36:12.396: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:12.505: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:12.617: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:12.732: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:13.065: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:13.175: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:13.285: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:13.394: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:13.620: INFO: Lookups using dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local jessie_udp@dns-test-service-2.dns-1558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1558.svc.cluster.local]

Sep  2 13:36:17.398: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:17.528: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:17.664: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:17.786: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:18.123: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:18.232: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:18.344: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:18.457: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1558.svc.cluster.local from pod dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348: the server could not find the requested resource (get pods dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348)
Sep  2 13:36:18.681: INFO: Lookups using dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1558.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1558.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1558.svc.cluster.local jessie_udp@dns-test-service-2.dns-1558.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1558.svc.cluster.local]

Sep  2 13:36:23.605: INFO: DNS probes using dns-1558/dns-test-58d4ac93-08cd-4774-8912-0ff72c04a348 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:36.286 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:24.092: INFO: Only supported for providers [vsphere] (not aws)
... skipping 73 lines ...
Sep  2 13:36:24.783: INFO: AfterEach: Cleaning up test resources.
Sep  2 13:36:24.783: INFO: Deleting PersistentVolumeClaim "pvc-pqdqn"
Sep  2 13:36:24.891: INFO: Deleting PersistentVolume "hostpath-j46bn"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":6,"skipped":90,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:25.009: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:19.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:36:20.528: INFO: Waiting up to 5m0s for pod "metadata-volume-bac1d8a8-110d-4556-b76e-20b04d27482d" in namespace "downward-api-3823" to be "Succeeded or Failed"
Sep  2 13:36:20.636: INFO: Pod "metadata-volume-bac1d8a8-110d-4556-b76e-20b04d27482d": Phase="Pending", Reason="", readiness=false. Elapsed: 107.807094ms
Sep  2 13:36:22.745: INFO: Pod "metadata-volume-bac1d8a8-110d-4556-b76e-20b04d27482d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217196227s
Sep  2 13:36:24.854: INFO: Pod "metadata-volume-bac1d8a8-110d-4556-b76e-20b04d27482d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.326523727s
STEP: Saw pod success
Sep  2 13:36:24.854: INFO: Pod "metadata-volume-bac1d8a8-110d-4556-b76e-20b04d27482d" satisfied condition "Succeeded or Failed"
Sep  2 13:36:24.963: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod metadata-volume-bac1d8a8-110d-4556-b76e-20b04d27482d container client-container: <nil>
STEP: delete the pod
Sep  2 13:36:25.205: INFO: Waiting for pod metadata-volume-bac1d8a8-110d-4556-b76e-20b04d27482d to disappear
Sep  2 13:36:25.313: INFO: Pod metadata-volume-bac1d8a8-110d-4556-b76e-20b04d27482d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.657 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Sep  2 13:36:16.869: INFO: PersistentVolumeClaim pvc-9lr9b found but phase is Pending instead of Bound.
Sep  2 13:36:18.989: INFO: PersistentVolumeClaim pvc-9lr9b found and phase=Bound (2.257427265s)
Sep  2 13:36:18.990: INFO: Waiting up to 3m0s for PersistentVolume local-2t74j to have phase Bound
Sep  2 13:36:19.112: INFO: PersistentVolume local-2t74j found and phase=Bound (122.098259ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fcjw
STEP: Creating a pod to test subpath
Sep  2 13:36:19.458: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fcjw" in namespace "provisioning-3337" to be "Succeeded or Failed"
Sep  2 13:36:19.570: INFO: Pod "pod-subpath-test-preprovisionedpv-fcjw": Phase="Pending", Reason="", readiness=false. Elapsed: 111.932658ms
Sep  2 13:36:21.681: INFO: Pod "pod-subpath-test-preprovisionedpv-fcjw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223484232s
Sep  2 13:36:23.807: INFO: Pod "pod-subpath-test-preprovisionedpv-fcjw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.349177018s
STEP: Saw pod success
Sep  2 13:36:23.807: INFO: Pod "pod-subpath-test-preprovisionedpv-fcjw" satisfied condition "Succeeded or Failed"
Sep  2 13:36:23.920: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fcjw container test-container-volume-preprovisionedpv-fcjw: <nil>
STEP: delete the pod
Sep  2 13:36:24.147: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fcjw to disappear
Sep  2 13:36:24.259: INFO: Pod pod-subpath-test-preprovisionedpv-fcjw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fcjw
Sep  2 13:36:24.259: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fcjw" in namespace "provisioning-3337"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":75,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:25.995: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 47 lines ...
STEP: SSH'ing host 18.192.100.44:22
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
STEP: SSH'ing host 18.192.100.44:22
Sep  2 13:36:22.088: INFO: Got stdout from 18.192.100.44:22: stdout
Sep  2 13:36:22.088: INFO: Got stderr from 18.192.100.44:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing ubuntu@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:27.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-3754" for this suite.


• [SLOW TEST:19.121 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":4,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:27.314: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:27.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2824" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":7,"skipped":93,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:12.386 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:27.618: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Sep  2 13:36:22.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep  2 13:36:23.073: INFO: Waiting up to 5m0s for pod "pod-4e2d0e2b-d71d-4141-b38d-e2b8f8dd852b" in namespace "emptydir-3343" to be "Succeeded or Failed"
Sep  2 13:36:23.183: INFO: Pod "pod-4e2d0e2b-d71d-4141-b38d-e2b8f8dd852b": Phase="Pending", Reason="", readiness=false. Elapsed: 109.708736ms
Sep  2 13:36:25.293: INFO: Pod "pod-4e2d0e2b-d71d-4141-b38d-e2b8f8dd852b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220089762s
Sep  2 13:36:27.404: INFO: Pod "pod-4e2d0e2b-d71d-4141-b38d-e2b8f8dd852b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33057547s
STEP: Saw pod success
Sep  2 13:36:27.404: INFO: Pod "pod-4e2d0e2b-d71d-4141-b38d-e2b8f8dd852b" satisfied condition "Succeeded or Failed"
Sep  2 13:36:27.514: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-4e2d0e2b-d71d-4141-b38d-e2b8f8dd852b container test-container: <nil>
STEP: delete the pod
Sep  2 13:36:27.746: INFO: Waiting for pod pod-4e2d0e2b-d71d-4141-b38d-e2b8f8dd852b to disappear
Sep  2 13:36:27.855: INFO: Pod pod-4e2d0e2b-d71d-4141-b38d-e2b8f8dd852b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.670 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:28.109: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 115 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:202
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:30.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-274" for this suite.

... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":5,"skipped":43,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:30.363: INFO: Only supported for providers [gce gke] (not aws)
... skipping 64 lines ...
Sep  2 13:36:17.819: INFO: PersistentVolumeClaim pvc-7vdsv found but phase is Pending instead of Bound.
Sep  2 13:36:19.929: INFO: PersistentVolumeClaim pvc-7vdsv found and phase=Bound (8.557016609s)
Sep  2 13:36:19.929: INFO: Waiting up to 3m0s for PersistentVolume local-z5mvf to have phase Bound
Sep  2 13:36:20.039: INFO: PersistentVolume local-z5mvf found and phase=Bound (109.768304ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ddnd
STEP: Creating a pod to test subpath
Sep  2 13:36:20.370: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ddnd" in namespace "provisioning-1894" to be "Succeeded or Failed"
Sep  2 13:36:20.481: INFO: Pod "pod-subpath-test-preprovisionedpv-ddnd": Phase="Pending", Reason="", readiness=false. Elapsed: 111.119263ms
Sep  2 13:36:22.592: INFO: Pod "pod-subpath-test-preprovisionedpv-ddnd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222090415s
Sep  2 13:36:24.705: INFO: Pod "pod-subpath-test-preprovisionedpv-ddnd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335486326s
Sep  2 13:36:26.825: INFO: Pod "pod-subpath-test-preprovisionedpv-ddnd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454590631s
Sep  2 13:36:28.934: INFO: Pod "pod-subpath-test-preprovisionedpv-ddnd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.564522358s
STEP: Saw pod success
Sep  2 13:36:28.935: INFO: Pod "pod-subpath-test-preprovisionedpv-ddnd" satisfied condition "Succeeded or Failed"
Sep  2 13:36:29.044: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-ddnd container test-container-subpath-preprovisionedpv-ddnd: <nil>
STEP: delete the pod
Sep  2 13:36:29.294: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ddnd to disappear
Sep  2 13:36:29.404: INFO: Pod pod-subpath-test-preprovisionedpv-ddnd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ddnd
Sep  2 13:36:29.404: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ddnd" in namespace "provisioning-1894"
... skipping 63 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:36:26.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3fc3145-61a3-426f-b1c6-f97723f61fbb" in namespace "projected-1089" to be "Succeeded or Failed"
Sep  2 13:36:26.326: INFO: Pod "downwardapi-volume-f3fc3145-61a3-426f-b1c6-f97723f61fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 108.399239ms
Sep  2 13:36:28.436: INFO: Pod "downwardapi-volume-f3fc3145-61a3-426f-b1c6-f97723f61fbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21794164s
Sep  2 13:36:30.548: INFO: Pod "downwardapi-volume-f3fc3145-61a3-426f-b1c6-f97723f61fbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329900259s
STEP: Saw pod success
Sep  2 13:36:30.548: INFO: Pod "downwardapi-volume-f3fc3145-61a3-426f-b1c6-f97723f61fbb" satisfied condition "Succeeded or Failed"
Sep  2 13:36:30.656: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod downwardapi-volume-f3fc3145-61a3-426f-b1c6-f97723f61fbb container client-container: <nil>
STEP: delete the pod
Sep  2 13:36:30.880: INFO: Waiting for pod downwardapi-volume-f3fc3145-61a3-426f-b1c6-f97723f61fbb to disappear
Sep  2 13:36:30.991: INFO: Pod downwardapi-volume-f3fc3145-61a3-426f-b1c6-f97723f61fbb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.665 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:31.225: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
Sep  2 13:36:18.102: INFO: PersistentVolumeClaim pvc-d87kd found but phase is Pending instead of Bound.
Sep  2 13:36:20.211: INFO: PersistentVolumeClaim pvc-d87kd found and phase=Bound (10.661573913s)
Sep  2 13:36:20.211: INFO: Waiting up to 3m0s for PersistentVolume local-c7v8s to have phase Bound
Sep  2 13:36:20.320: INFO: PersistentVolume local-c7v8s found and phase=Bound (108.924414ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kfdj
STEP: Creating a pod to test subpath
Sep  2 13:36:20.653: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kfdj" in namespace "provisioning-7606" to be "Succeeded or Failed"
Sep  2 13:36:20.763: INFO: Pod "pod-subpath-test-preprovisionedpv-kfdj": Phase="Pending", Reason="", readiness=false. Elapsed: 109.704503ms
Sep  2 13:36:22.874: INFO: Pod "pod-subpath-test-preprovisionedpv-kfdj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220330312s
Sep  2 13:36:24.989: INFO: Pod "pod-subpath-test-preprovisionedpv-kfdj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335658949s
Sep  2 13:36:27.100: INFO: Pod "pod-subpath-test-preprovisionedpv-kfdj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.44640723s
STEP: Saw pod success
Sep  2 13:36:27.100: INFO: Pod "pod-subpath-test-preprovisionedpv-kfdj" satisfied condition "Succeeded or Failed"
Sep  2 13:36:27.210: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-kfdj container test-container-volume-preprovisionedpv-kfdj: <nil>
STEP: delete the pod
Sep  2 13:36:27.437: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kfdj to disappear
Sep  2 13:36:27.547: INFO: Pod pod-subpath-test-preprovisionedpv-kfdj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kfdj
Sep  2 13:36:27.547: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kfdj" in namespace "provisioning-7606"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":3,"skipped":29,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:31.518: INFO: Only supported for providers [gce gke] (not aws)
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-9wv6
STEP: Creating a pod to test atomic-volume-subpath
Sep  2 13:36:01.138: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9wv6" in namespace "subpath-5369" to be "Succeeded or Failed"
Sep  2 13:36:01.261: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Pending", Reason="", readiness=false. Elapsed: 122.632754ms
Sep  2 13:36:03.371: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233037598s
Sep  2 13:36:05.481: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343515609s
Sep  2 13:36:07.595: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456619789s
Sep  2 13:36:09.705: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Running", Reason="", readiness=true. Elapsed: 8.567255321s
Sep  2 13:36:11.816: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Running", Reason="", readiness=true. Elapsed: 10.678067126s
... skipping 4 lines ...
Sep  2 13:36:22.379: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Running", Reason="", readiness=true. Elapsed: 21.240653305s
Sep  2 13:36:24.488: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Running", Reason="", readiness=true. Elapsed: 23.35053954s
Sep  2 13:36:26.599: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Running", Reason="", readiness=true. Elapsed: 25.46149699s
Sep  2 13:36:28.709: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Running", Reason="", readiness=true. Elapsed: 27.571212951s
Sep  2 13:36:30.820: INFO: Pod "pod-subpath-test-secret-9wv6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.681850502s
STEP: Saw pod success
Sep  2 13:36:30.820: INFO: Pod "pod-subpath-test-secret-9wv6" satisfied condition "Succeeded or Failed"
Sep  2 13:36:30.933: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-secret-9wv6 container test-container-subpath-secret-9wv6: <nil>
STEP: delete the pod
Sep  2 13:36:31.166: INFO: Waiting for pod pod-subpath-test-secret-9wv6 to disappear
Sep  2 13:36:31.275: INFO: Pod pod-subpath-test-secret-9wv6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-9wv6
Sep  2 13:36:31.275: INFO: Deleting pod "pod-subpath-test-secret-9wv6" in namespace "subpath-5369"
... skipping 37 lines ...
Sep  2 13:36:18.182: INFO: PersistentVolumeClaim pvc-6r8gt found but phase is Pending instead of Bound.
Sep  2 13:36:20.293: INFO: PersistentVolumeClaim pvc-6r8gt found and phase=Bound (10.665529878s)
Sep  2 13:36:20.293: INFO: Waiting up to 3m0s for PersistentVolume local-t99dh to have phase Bound
Sep  2 13:36:20.403: INFO: PersistentVolume local-t99dh found and phase=Bound (110.817604ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-d7nl
STEP: Creating a pod to test subpath
Sep  2 13:36:20.732: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-d7nl" in namespace "provisioning-841" to be "Succeeded or Failed"
Sep  2 13:36:20.840: INFO: Pod "pod-subpath-test-preprovisionedpv-d7nl": Phase="Pending", Reason="", readiness=false. Elapsed: 107.55891ms
Sep  2 13:36:22.951: INFO: Pod "pod-subpath-test-preprovisionedpv-d7nl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219121078s
Sep  2 13:36:25.068: INFO: Pod "pod-subpath-test-preprovisionedpv-d7nl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335733651s
Sep  2 13:36:27.177: INFO: Pod "pod-subpath-test-preprovisionedpv-d7nl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445137829s
Sep  2 13:36:29.295: INFO: Pod "pod-subpath-test-preprovisionedpv-d7nl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563199831s
Sep  2 13:36:31.404: INFO: Pod "pod-subpath-test-preprovisionedpv-d7nl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.672306473s
STEP: Saw pod success
Sep  2 13:36:31.404: INFO: Pod "pod-subpath-test-preprovisionedpv-d7nl" satisfied condition "Succeeded or Failed"
Sep  2 13:36:31.513: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-d7nl container test-container-subpath-preprovisionedpv-d7nl: <nil>
STEP: delete the pod
Sep  2 13:36:31.745: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-d7nl to disappear
Sep  2 13:36:31.852: INFO: Pod pod-subpath-test-preprovisionedpv-d7nl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-d7nl
Sep  2 13:36:31.852: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-d7nl" in namespace "provisioning-841"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:34.120: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 26 lines ...
STEP: Wait for the deployment to be ready
Sep  2 13:36:28.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766186587, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766186587, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766186587, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766186587, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep  2 13:36:30.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766186587, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766186587, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63766186587, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63766186587, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep  2 13:36:33.541: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:34.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6101" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:9.109 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":6,"skipped":77,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:35.121: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 115 lines ...
• [SLOW TEST:24.827 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1321
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:39.168: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":3,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:30.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Sep  2 13:36:31.612: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-9136" to be "Succeeded or Failed"
Sep  2 13:36:31.723: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 110.437175ms
Sep  2 13:36:33.836: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223575658s
Sep  2 13:36:35.946: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333725996s
Sep  2 13:36:38.056: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443372047s
Sep  2 13:36:40.166: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553505167s
Sep  2 13:36:40.166: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:40.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9136" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-expansion 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-0a750c8f-13de-45d8-a0cd-848bca68ad00
STEP: Creating a pod to test consume secrets
Sep  2 13:36:32.029: INFO: Waiting up to 5m0s for pod "pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a" in namespace "secrets-2634" to be "Succeeded or Failed"
Sep  2 13:36:32.138: INFO: Pod "pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 108.496222ms
Sep  2 13:36:34.246: INFO: Pod "pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217000195s
Sep  2 13:36:36.355: INFO: Pod "pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326076418s
Sep  2 13:36:38.464: INFO: Pod "pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435241483s
Sep  2 13:36:40.574: INFO: Pod "pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.544518925s
STEP: Saw pod success
Sep  2 13:36:40.574: INFO: Pod "pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a" satisfied condition "Succeeded or Failed"
Sep  2 13:36:40.685: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a container secret-env-test: <nil>
STEP: delete the pod
Sep  2 13:36:40.908: INFO: Waiting for pod pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a to disappear
Sep  2 13:36:41.016: INFO: Pod pod-secrets-e859a317-f862-467f-b26c-d93d7565bb6a no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.986 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:41.255: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3114
STEP: Waiting until pod test-pod will start running in namespace statefulset-3114
STEP: Creating statefulset with conflicting port in namespace statefulset-3114
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3114
Sep  2 13:36:27.462: INFO: Observed stateful pod in namespace: statefulset-3114, name: ss-0, uid: 5325d27d-df48-4509-9a3c-61c011d70f70, status phase: Pending. Waiting for statefulset controller to delete.
Sep  2 13:36:27.581: INFO: Observed stateful pod in namespace: statefulset-3114, name: ss-0, uid: 5325d27d-df48-4509-9a3c-61c011d70f70, status phase: Failed. Waiting for statefulset controller to delete.
Sep  2 13:36:27.593: INFO: Observed stateful pod in namespace: statefulset-3114, name: ss-0, uid: 5325d27d-df48-4509-9a3c-61c011d70f70, status phase: Failed. Waiting for statefulset controller to delete.
Sep  2 13:36:27.595: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3114
STEP: Removing pod with conflicting port in namespace statefulset-3114
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3114 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Sep  2 13:36:32.038: INFO: Deleting all statefulset in ns statefulset-3114
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:43.261: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:43.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2804" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":7,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:47.943 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:239
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":2,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:43.515: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Sep  2 13:36:41.067: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  2 13:36:41.180: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-p8mr
STEP: Creating a pod to test subpath
Sep  2 13:36:41.295: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-p8mr" in namespace "provisioning-9959" to be "Succeeded or Failed"
Sep  2 13:36:41.405: INFO: Pod "pod-subpath-test-inlinevolume-p8mr": Phase="Pending", Reason="", readiness=false. Elapsed: 109.348344ms
Sep  2 13:36:43.515: INFO: Pod "pod-subpath-test-inlinevolume-p8mr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219952348s
Sep  2 13:36:45.626: INFO: Pod "pod-subpath-test-inlinevolume-p8mr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331020539s
STEP: Saw pod success
Sep  2 13:36:45.626: INFO: Pod "pod-subpath-test-inlinevolume-p8mr" satisfied condition "Succeeded or Failed"
Sep  2 13:36:45.736: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-p8mr container test-container-subpath-inlinevolume-p8mr: <nil>
STEP: delete the pod
Sep  2 13:36:45.975: INFO: Waiting for pod pod-subpath-test-inlinevolume-p8mr to disappear
Sep  2 13:36:46.085: INFO: Pod pod-subpath-test-inlinevolume-p8mr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-p8mr
Sep  2 13:36:46.085: INFO: Deleting pod "pod-subpath-test-inlinevolume-p8mr" in namespace "provisioning-9959"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Conntrack
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:35.961 seconds]
[sig-network] Conntrack
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:206
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":2,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:47.105: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 37 lines ...
• [SLOW TEST:17.049 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":7,"skipped":79,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:52.203: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-expansion  loopback local block volume should support online expansion on node","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:41.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
• [SLOW TEST:12.796 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:36:54.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-75" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":5,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:54.956: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 126 lines ...
Sep  2 13:36:32.034: INFO: PersistentVolumeClaim pvc-xc5lp found but phase is Pending instead of Bound.
Sep  2 13:36:34.145: INFO: PersistentVolumeClaim pvc-xc5lp found and phase=Bound (6.442902842s)
Sep  2 13:36:34.145: INFO: Waiting up to 3m0s for PersistentVolume local-xd7zp to have phase Bound
Sep  2 13:36:34.256: INFO: PersistentVolume local-xd7zp found and phase=Bound (111.046073ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tdmf
STEP: Creating a pod to test subpath
Sep  2 13:36:34.587: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tdmf" in namespace "provisioning-7929" to be "Succeeded or Failed"
Sep  2 13:36:34.696: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Pending", Reason="", readiness=false. Elapsed: 109.46534ms
Sep  2 13:36:36.808: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220946323s
Sep  2 13:36:38.919: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331758774s
Sep  2 13:36:41.035: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448083491s
Sep  2 13:36:43.146: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.558624861s
Sep  2 13:36:45.256: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.668935398s
STEP: Saw pod success
Sep  2 13:36:45.256: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf" satisfied condition "Succeeded or Failed"
Sep  2 13:36:45.366: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-tdmf container test-container-subpath-preprovisionedpv-tdmf: <nil>
STEP: delete the pod
Sep  2 13:36:45.594: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tdmf to disappear
Sep  2 13:36:45.704: INFO: Pod pod-subpath-test-preprovisionedpv-tdmf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tdmf
Sep  2 13:36:45.704: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tdmf" in namespace "provisioning-7929"
STEP: Creating pod pod-subpath-test-preprovisionedpv-tdmf
STEP: Creating a pod to test subpath
Sep  2 13:36:45.924: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tdmf" in namespace "provisioning-7929" to be "Succeeded or Failed"
Sep  2 13:36:46.034: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Pending", Reason="", readiness=false. Elapsed: 109.386963ms
Sep  2 13:36:48.144: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219528419s
Sep  2 13:36:50.261: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336356965s
Sep  2 13:36:52.371: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.447174413s
STEP: Saw pod success
Sep  2 13:36:52.372: INFO: Pod "pod-subpath-test-preprovisionedpv-tdmf" satisfied condition "Succeeded or Failed"
Sep  2 13:36:52.482: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-tdmf container test-container-subpath-preprovisionedpv-tdmf: <nil>
STEP: delete the pod
Sep  2 13:36:52.712: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tdmf to disappear
Sep  2 13:36:52.822: INFO: Pod pod-subpath-test-preprovisionedpv-tdmf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tdmf
Sep  2 13:36:52.822: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tdmf" in namespace "provisioning-7929"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":7,"skipped":45,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:36:56.783: INFO: Only supported for providers [gce gke] (not aws)
... skipping 462 lines ...
Sep  2 13:36:48.851: INFO: PersistentVolumeClaim pvc-ckqm8 found but phase is Pending instead of Bound.
Sep  2 13:36:50.961: INFO: PersistentVolumeClaim pvc-ckqm8 found and phase=Bound (10.658581017s)
Sep  2 13:36:50.961: INFO: Waiting up to 3m0s for PersistentVolume local-gzrw7 to have phase Bound
Sep  2 13:36:51.075: INFO: PersistentVolume local-gzrw7 found and phase=Bound (113.497184ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rc2f
STEP: Creating a pod to test subpath
Sep  2 13:36:51.497: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rc2f" in namespace "provisioning-4141" to be "Succeeded or Failed"
Sep  2 13:36:51.614: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 116.583641ms
Sep  2 13:36:53.724: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226351476s
Sep  2 13:36:55.834: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337179538s
Sep  2 13:36:57.944: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.446841879s
STEP: Saw pod success
Sep  2 13:36:57.944: INFO: Pod "pod-subpath-test-preprovisionedpv-rc2f" satisfied condition "Succeeded or Failed"
Sep  2 13:36:58.053: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-rc2f container test-container-volume-preprovisionedpv-rc2f: <nil>
STEP: delete the pod
Sep  2 13:36:58.287: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rc2f to disappear
Sep  2 13:36:58.414: INFO: Pod pod-subpath-test-preprovisionedpv-rc2f no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rc2f
Sep  2 13:36:58.414: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rc2f" in namespace "provisioning-4141"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":73,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Sep  2 13:36:57.540: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8253" to be "Succeeded or Failed"
Sep  2 13:36:57.653: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 113.218637ms
Sep  2 13:36:59.765: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.224908276s
STEP: Saw pod success
Sep  2 13:36:59.765: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep  2 13:36:59.874: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep  2 13:37:00.101: INFO: Waiting for pod pod-host-path-test to disappear
Sep  2 13:37:00.209: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:00.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8253" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":5,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:00.443: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 30 lines ...
Sep  2 13:36:42.322: INFO: PersistentVolume nfs-sn2xf found and phase=Bound (109.151909ms)
Sep  2 13:36:42.436: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-znmd4] to have phase Bound
Sep  2 13:36:42.545: INFO: PersistentVolumeClaim pvc-znmd4 found and phase=Bound (108.661233ms)
STEP: Checking pod has write access to PersistentVolumes
Sep  2 13:36:42.653: INFO: Creating nfs test pod
Sep  2 13:36:42.770: INFO: Pod should terminate with exitcode 0 (success)
Sep  2 13:36:42.770: INFO: Waiting up to 5m0s for pod "pvc-tester-4lhh6" in namespace "pv-624" to be "Succeeded or Failed"
Sep  2 13:36:42.879: INFO: Pod "pvc-tester-4lhh6": Phase="Pending", Reason="", readiness=false. Elapsed: 108.738864ms
Sep  2 13:36:44.989: INFO: Pod "pvc-tester-4lhh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218351454s
STEP: Saw pod success
Sep  2 13:36:44.989: INFO: Pod "pvc-tester-4lhh6" satisfied condition "Succeeded or Failed"
Sep  2 13:36:44.989: INFO: Pod pvc-tester-4lhh6 succeeded 
Sep  2 13:36:44.989: INFO: Deleting pod "pvc-tester-4lhh6" in namespace "pv-624"
Sep  2 13:36:45.100: INFO: Wait up to 5m0s for pod "pvc-tester-4lhh6" to be fully deleted
Sep  2 13:36:45.318: INFO: Creating nfs test pod
Sep  2 13:36:45.427: INFO: Pod should terminate with exitcode 0 (success)
Sep  2 13:36:45.428: INFO: Waiting up to 5m0s for pod "pvc-tester-nzfr7" in namespace "pv-624" to be "Succeeded or Failed"
Sep  2 13:36:45.537: INFO: Pod "pvc-tester-nzfr7": Phase="Pending", Reason="", readiness=false. Elapsed: 109.10854ms
Sep  2 13:36:47.647: INFO: Pod "pvc-tester-nzfr7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219101664s
STEP: Saw pod success
Sep  2 13:36:47.647: INFO: Pod "pvc-tester-nzfr7" satisfied condition "Succeeded or Failed"
Sep  2 13:36:47.647: INFO: Pod pvc-tester-nzfr7 succeeded 
Sep  2 13:36:47.647: INFO: Deleting pod "pvc-tester-nzfr7" in namespace "pv-624"
Sep  2 13:36:47.767: INFO: Wait up to 5m0s for pod "pvc-tester-nzfr7" to be fully deleted
Sep  2 13:36:47.984: INFO: Creating nfs test pod
Sep  2 13:36:48.094: INFO: Pod should terminate with exitcode 0 (success)
Sep  2 13:36:48.094: INFO: Waiting up to 5m0s for pod "pvc-tester-8gv4z" in namespace "pv-624" to be "Succeeded or Failed"
Sep  2 13:36:48.203: INFO: Pod "pvc-tester-8gv4z": Phase="Pending", Reason="", readiness=false. Elapsed: 108.919246ms
Sep  2 13:36:50.321: INFO: Pod "pvc-tester-8gv4z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.226314291s
STEP: Saw pod success
Sep  2 13:36:50.321: INFO: Pod "pvc-tester-8gv4z" satisfied condition "Succeeded or Failed"
Sep  2 13:36:50.321: INFO: Pod pvc-tester-8gv4z succeeded 
Sep  2 13:36:50.321: INFO: Deleting pod "pvc-tester-8gv4z" in namespace "pv-624"
Sep  2 13:36:50.438: INFO: Wait up to 5m0s for pod "pvc-tester-8gv4z" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Sep  2 13:36:50.804: INFO: Deleting PVC pvc-skkwc to trigger reclamation of PV nfs-vzv6k
Sep  2 13:36:50.804: INFO: Deleting PersistentVolumeClaim "pvc-skkwc"
... skipping 62 lines ...
• [SLOW TEST:5.877 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":87,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:05.891: INFO: Only supported for providers [gce gke] (not aws)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:31.614: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Sep  2 13:36:48.055: INFO: PersistentVolumeClaim pvc-6rwdz found but phase is Pending instead of Bound.
Sep  2 13:36:50.175: INFO: PersistentVolumeClaim pvc-6rwdz found and phase=Bound (8.563744562s)
Sep  2 13:36:50.175: INFO: Waiting up to 3m0s for PersistentVolume local-7wldr to have phase Bound
Sep  2 13:36:50.371: INFO: PersistentVolume local-7wldr found and phase=Bound (196.431275ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mqkd
STEP: Creating a pod to test subpath
Sep  2 13:36:50.703: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mqkd" in namespace "provisioning-7224" to be "Succeeded or Failed"
Sep  2 13:36:50.812: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd": Phase="Pending", Reason="", readiness=false. Elapsed: 109.336039ms
Sep  2 13:36:52.922: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219671731s
Sep  2 13:36:55.035: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332161945s
Sep  2 13:36:57.145: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.442704191s
STEP: Saw pod success
Sep  2 13:36:57.145: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd" satisfied condition "Succeeded or Failed"
Sep  2 13:36:57.254: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-mqkd container test-container-subpath-preprovisionedpv-mqkd: <nil>
STEP: delete the pod
Sep  2 13:36:57.484: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mqkd to disappear
Sep  2 13:36:57.594: INFO: Pod pod-subpath-test-preprovisionedpv-mqkd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mqkd
Sep  2 13:36:57.594: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mqkd" in namespace "provisioning-7224"
STEP: Creating pod pod-subpath-test-preprovisionedpv-mqkd
STEP: Creating a pod to test subpath
Sep  2 13:36:57.814: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mqkd" in namespace "provisioning-7224" to be "Succeeded or Failed"
Sep  2 13:36:57.924: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd": Phase="Pending", Reason="", readiness=false. Elapsed: 109.282199ms
Sep  2 13:37:00.036: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221776462s
Sep  2 13:37:02.147: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33241043s
Sep  2 13:37:04.257: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.4423623s
STEP: Saw pod success
Sep  2 13:37:04.257: INFO: Pod "pod-subpath-test-preprovisionedpv-mqkd" satisfied condition "Succeeded or Failed"
Sep  2 13:37:04.366: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-mqkd container test-container-subpath-preprovisionedpv-mqkd: <nil>
STEP: delete the pod
Sep  2 13:37:04.591: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mqkd to disappear
Sep  2 13:37:04.700: INFO: Pod pod-subpath-test-preprovisionedpv-mqkd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mqkd
Sep  2 13:37:04.700: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mqkd" in namespace "provisioning-7224"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:06.237: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:06.311: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":7,"skipped":29,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:54.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:36:55.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91" in namespace "downward-api-40" to be "Succeeded or Failed"
Sep  2 13:36:55.624: INFO: Pod "downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91": Phase="Pending", Reason="", readiness=false. Elapsed: 109.776857ms
Sep  2 13:36:57.734: INFO: Pod "downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219717575s
Sep  2 13:36:59.845: INFO: Pod "downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330345423s
Sep  2 13:37:01.957: INFO: Pod "downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442704709s
Sep  2 13:37:04.067: INFO: Pod "downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55281289s
Sep  2 13:37:06.178: INFO: Pod "downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.663383637s
STEP: Saw pod success
Sep  2 13:37:06.178: INFO: Pod "downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91" satisfied condition "Succeeded or Failed"
Sep  2 13:37:06.288: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91 container client-container: <nil>
STEP: delete the pod
Sep  2 13:37:06.513: INFO: Waiting for pod downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91 to disappear
Sep  2 13:37:06.627: INFO: Pod downwardapi-volume-d9f7131a-03fd-4bec-a8dd-0486e5291b91 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.997 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":29,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":3,"skipped":20,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:58.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cross-namespace-pod-affinity
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
• [SLOW TEST:10.010 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:1423
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.","total":-1,"completed":4,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:08.049: INFO: Only supported for providers [vsphere] (not aws)
... skipping 34 lines ...
STEP: Destroying namespace "services-4920" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:08.964: INFO: Only supported for providers [vsphere] (not aws)
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:09.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1287" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":9,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:10.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4912" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":8,"skipped":93,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 135 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:408

    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:22.319: INFO: >>> kubeConfig: /root/.kube/config
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:25.114 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":5,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:11.666: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 17 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:11.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Sep  2 13:37:11.922: INFO: found topology map[topology.kubernetes.io/zone:eu-central-1a]
Sep  2 13:37:11.922: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep  2 13:37:11.922: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:13.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5427" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":6,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:14.032: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 50 lines ...
• [SLOW TEST:5.989 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":10,"skipped":31,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:15.795: INFO: Only supported for providers [gce gke] (not aws)
... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":5,"skipped":66,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:17.375: INFO: Only supported for providers [vsphere] (not aws)
... skipping 155 lines ...
Sep  2 13:36:47.695: INFO: PersistentVolumeClaim pvc-789ql found but phase is Pending instead of Bound.
Sep  2 13:36:49.804: INFO: PersistentVolumeClaim pvc-789ql found and phase=Bound (10.65119178s)
Sep  2 13:36:49.804: INFO: Waiting up to 3m0s for PersistentVolume local-8k8h2 to have phase Bound
Sep  2 13:36:49.911: INFO: PersistentVolume local-8k8h2 found and phase=Bound (107.455297ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k8qd
STEP: Creating a pod to test atomic-volume-subpath
Sep  2 13:36:50.262: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k8qd" in namespace "provisioning-5983" to be "Succeeded or Failed"
Sep  2 13:36:50.380: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Pending", Reason="", readiness=false. Elapsed: 117.783226ms
Sep  2 13:36:52.488: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225243537s
Sep  2 13:36:54.616: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353811394s
Sep  2 13:36:56.725: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462667282s
Sep  2 13:36:58.834: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.571378645s
Sep  2 13:37:00.947: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Running", Reason="", readiness=true. Elapsed: 10.684345407s
... skipping 2 lines ...
Sep  2 13:37:07.273: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Running", Reason="", readiness=true. Elapsed: 17.010869735s
Sep  2 13:37:09.381: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Running", Reason="", readiness=true. Elapsed: 19.118981392s
Sep  2 13:37:11.491: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Running", Reason="", readiness=true. Elapsed: 21.228453941s
Sep  2 13:37:13.599: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Running", Reason="", readiness=true. Elapsed: 23.336855145s
Sep  2 13:37:15.708: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.446047472s
STEP: Saw pod success
Sep  2 13:37:15.708: INFO: Pod "pod-subpath-test-preprovisionedpv-k8qd" satisfied condition "Succeeded or Failed"
Sep  2 13:37:15.816: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-k8qd container test-container-subpath-preprovisionedpv-k8qd: <nil>
STEP: delete the pod
Sep  2 13:37:16.051: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k8qd to disappear
Sep  2 13:37:16.158: INFO: Pod pod-subpath-test-preprovisionedpv-k8qd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k8qd
Sep  2 13:37:16.158: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k8qd" in namespace "provisioning-5983"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:17.696: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 83 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:36:55.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:24.317 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:280
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":4,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:19.957: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 124 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:21.263: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-a0b89a78-1379-4165-9aec-8c02e7a35105
STEP: Creating a pod to test consume secrets
Sep  2 13:37:14.811: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235" in namespace "projected-6130" to be "Succeeded or Failed"
Sep  2 13:37:14.920: INFO: Pod "pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235": Phase="Pending", Reason="", readiness=false. Elapsed: 109.577652ms
Sep  2 13:37:17.031: INFO: Pod "pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220154328s
Sep  2 13:37:19.142: INFO: Pod "pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330739791s
Sep  2 13:37:21.252: INFO: Pod "pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.441573557s
STEP: Saw pod success
Sep  2 13:37:21.253: INFO: Pod "pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235" satisfied condition "Succeeded or Failed"
Sep  2 13:37:21.363: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep  2 13:37:21.590: INFO: Waiting for pod pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235 to disappear
Sep  2 13:37:21.699: INFO: Pod pod-projected-secrets-15a483c3-58f2-41cd-bebc-85cda86cc235 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.883 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:21.940: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 65 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Sep  2 13:37:18.453: INFO: Waiting up to 5m0s for pod "busybox-user-0-b042eb53-1a40-46b6-bd0e-0d187a78c844" in namespace "security-context-test-9586" to be "Succeeded or Failed"
Sep  2 13:37:18.561: INFO: Pod "busybox-user-0-b042eb53-1a40-46b6-bd0e-0d187a78c844": Phase="Pending", Reason="", readiness=false. Elapsed: 107.793365ms
Sep  2 13:37:20.670: INFO: Pod "busybox-user-0-b042eb53-1a40-46b6-bd0e-0d187a78c844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216599552s
Sep  2 13:37:22.781: INFO: Pod "busybox-user-0-b042eb53-1a40-46b6-bd0e-0d187a78c844": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32786716s
Sep  2 13:37:24.890: INFO: Pod "busybox-user-0-b042eb53-1a40-46b6-bd0e-0d187a78c844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.436545137s
Sep  2 13:37:24.890: INFO: Pod "busybox-user-0-b042eb53-1a40-46b6-bd0e-0d187a78c844" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:24.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9586" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":93,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:19.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Sep  2 13:37:20.646: INFO: Waiting up to 5m0s for pod "var-expansion-b4aa8d24-ea84-4c3f-8f38-07793b919e36" in namespace "var-expansion-5835" to be "Succeeded or Failed"
Sep  2 13:37:20.755: INFO: Pod "var-expansion-b4aa8d24-ea84-4c3f-8f38-07793b919e36": Phase="Pending", Reason="", readiness=false. Elapsed: 108.497924ms
Sep  2 13:37:22.865: INFO: Pod "var-expansion-b4aa8d24-ea84-4c3f-8f38-07793b919e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219197246s
Sep  2 13:37:24.975: INFO: Pod "var-expansion-b4aa8d24-ea84-4c3f-8f38-07793b919e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329200785s
STEP: Saw pod success
Sep  2 13:37:24.976: INFO: Pod "var-expansion-b4aa8d24-ea84-4c3f-8f38-07793b919e36" satisfied condition "Succeeded or Failed"
Sep  2 13:37:25.089: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod var-expansion-b4aa8d24-ea84-4c3f-8f38-07793b919e36 container dapi-container: <nil>
STEP: delete the pod
Sep  2 13:37:25.319: INFO: Waiting for pod var-expansion-b4aa8d24-ea84-4c3f-8f38-07793b919e36 to disappear
Sep  2 13:37:25.428: INFO: Pod var-expansion-b4aa8d24-ea84-4c3f-8f38-07793b919e36 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.664 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":5,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:473
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":5,"skipped":10,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:25.795: INFO: Only supported for providers [azure] (not aws)
... skipping 123 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:27.355: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 14 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":9,"skipped":95,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:24.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:37:25.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19894731-e93d-4697-94be-d305e852a403" in namespace "downward-api-8155" to be "Succeeded or Failed"
Sep  2 13:37:25.553: INFO: Pod "downwardapi-volume-19894731-e93d-4697-94be-d305e852a403": Phase="Pending", Reason="", readiness=false. Elapsed: 109.114828ms
Sep  2 13:37:27.663: INFO: Pod "downwardapi-volume-19894731-e93d-4697-94be-d305e852a403": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219542913s
Sep  2 13:37:29.772: INFO: Pod "downwardapi-volume-19894731-e93d-4697-94be-d305e852a403": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.328639637s
STEP: Saw pod success
Sep  2 13:37:29.773: INFO: Pod "downwardapi-volume-19894731-e93d-4697-94be-d305e852a403" satisfied condition "Succeeded or Failed"
Sep  2 13:37:29.881: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod downwardapi-volume-19894731-e93d-4697-94be-d305e852a403 container client-container: <nil>
STEP: delete the pod
Sep  2 13:37:30.113: INFO: Waiting for pod downwardapi-volume-19894731-e93d-4697-94be-d305e852a403 to disappear
Sep  2 13:37:30.228: INFO: Pod downwardapi-volume-19894731-e93d-4697-94be-d305e852a403 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.670 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":95,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:30.474: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","total":-1,"completed":8,"skipped":96,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:05.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":96,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:30.489: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
Sep  2 13:37:17.933: INFO: PersistentVolumeClaim pvc-gqbv4 found but phase is Pending instead of Bound.
Sep  2 13:37:20.044: INFO: PersistentVolumeClaim pvc-gqbv4 found and phase=Bound (4.327857221s)
Sep  2 13:37:20.044: INFO: Waiting up to 3m0s for PersistentVolume local-xs4mc to have phase Bound
Sep  2 13:37:20.155: INFO: PersistentVolume local-xs4mc found and phase=Bound (110.15481ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fmrx
STEP: Creating a pod to test subpath
Sep  2 13:37:20.486: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fmrx" in namespace "provisioning-4531" to be "Succeeded or Failed"
Sep  2 13:37:20.595: INFO: Pod "pod-subpath-test-preprovisionedpv-fmrx": Phase="Pending", Reason="", readiness=false. Elapsed: 108.285604ms
Sep  2 13:37:22.706: INFO: Pod "pod-subpath-test-preprovisionedpv-fmrx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219511051s
Sep  2 13:37:24.815: INFO: Pod "pod-subpath-test-preprovisionedpv-fmrx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328352341s
Sep  2 13:37:26.924: INFO: Pod "pod-subpath-test-preprovisionedpv-fmrx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437936103s
STEP: Saw pod success
Sep  2 13:37:26.925: INFO: Pod "pod-subpath-test-preprovisionedpv-fmrx" satisfied condition "Succeeded or Failed"
Sep  2 13:37:27.035: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fmrx container test-container-subpath-preprovisionedpv-fmrx: <nil>
STEP: delete the pod
Sep  2 13:37:27.312: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fmrx to disappear
Sep  2 13:37:27.422: INFO: Pod pod-subpath-test-preprovisionedpv-fmrx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fmrx
Sep  2 13:37:27.422: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fmrx" in namespace "provisioning-4531"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":37,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:30.599: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 42 lines ...
• [SLOW TEST:5.611 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":45,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:31.311: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Sep  2 13:37:25.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep  2 13:37:26.509: INFO: Waiting up to 5m0s for pod "downward-api-322ae500-b449-4ecf-aef7-6b8e7ea151fa" in namespace "downward-api-2347" to be "Succeeded or Failed"
Sep  2 13:37:26.619: INFO: Pod "downward-api-322ae500-b449-4ecf-aef7-6b8e7ea151fa": Phase="Pending", Reason="", readiness=false. Elapsed: 110.002019ms
Sep  2 13:37:28.729: INFO: Pod "downward-api-322ae500-b449-4ecf-aef7-6b8e7ea151fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220083173s
Sep  2 13:37:30.839: INFO: Pod "downward-api-322ae500-b449-4ecf-aef7-6b8e7ea151fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.330230631s
STEP: Saw pod success
Sep  2 13:37:30.839: INFO: Pod "downward-api-322ae500-b449-4ecf-aef7-6b8e7ea151fa" satisfied condition "Succeeded or Failed"
Sep  2 13:37:30.948: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod downward-api-322ae500-b449-4ecf-aef7-6b8e7ea151fa container dapi-container: <nil>
STEP: delete the pod
Sep  2 13:37:31.177: INFO: Waiting for pod downward-api-322ae500-b449-4ecf-aef7-6b8e7ea151fa to disappear
Sep  2 13:37:31.287: INFO: Pod downward-api-322ae500-b449-4ecf-aef7-6b8e7ea151fa no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 38 lines ...
Sep  2 13:37:18.752: INFO: PersistentVolumeClaim pvc-hqfqj found but phase is Pending instead of Bound.
Sep  2 13:37:20.863: INFO: PersistentVolumeClaim pvc-hqfqj found and phase=Bound (12.765883936s)
Sep  2 13:37:20.863: INFO: Waiting up to 3m0s for PersistentVolume local-krl54 to have phase Bound
Sep  2 13:37:20.973: INFO: PersistentVolume local-krl54 found and phase=Bound (109.488557ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2mxs
STEP: Creating a pod to test subpath
Sep  2 13:37:21.304: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2mxs" in namespace "provisioning-1596" to be "Succeeded or Failed"
Sep  2 13:37:21.412: INFO: Pod "pod-subpath-test-preprovisionedpv-2mxs": Phase="Pending", Reason="", readiness=false. Elapsed: 108.566504ms
Sep  2 13:37:23.522: INFO: Pod "pod-subpath-test-preprovisionedpv-2mxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217829671s
Sep  2 13:37:25.631: INFO: Pod "pod-subpath-test-preprovisionedpv-2mxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326904581s
Sep  2 13:37:27.744: INFO: Pod "pod-subpath-test-preprovisionedpv-2mxs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.440531017s
STEP: Saw pod success
Sep  2 13:37:27.744: INFO: Pod "pod-subpath-test-preprovisionedpv-2mxs" satisfied condition "Succeeded or Failed"
Sep  2 13:37:27.853: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-2mxs container test-container-volume-preprovisionedpv-2mxs: <nil>
STEP: delete the pod
Sep  2 13:37:28.088: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2mxs to disappear
Sep  2 13:37:28.197: INFO: Pod pod-subpath-test-preprovisionedpv-2mxs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2mxs
Sep  2 13:37:28.197: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2mxs" in namespace "provisioning-1596"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":6,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:25.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Sep  2 13:37:25.771: INFO: Waiting up to 5m0s for pod "downward-api-437111c6-1d82-4ee5-8307-276a46432704" in namespace "downward-api-4793" to be "Succeeded or Failed"
Sep  2 13:37:25.878: INFO: Pod "downward-api-437111c6-1d82-4ee5-8307-276a46432704": Phase="Pending", Reason="", readiness=false. Elapsed: 107.116673ms
Sep  2 13:37:27.986: INFO: Pod "downward-api-437111c6-1d82-4ee5-8307-276a46432704": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215177329s
Sep  2 13:37:30.093: INFO: Pod "downward-api-437111c6-1d82-4ee5-8307-276a46432704": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322479151s
Sep  2 13:37:32.201: INFO: Pod "downward-api-437111c6-1d82-4ee5-8307-276a46432704": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.430705002s
STEP: Saw pod success
Sep  2 13:37:32.201: INFO: Pod "downward-api-437111c6-1d82-4ee5-8307-276a46432704" satisfied condition "Succeeded or Failed"
Sep  2 13:37:32.309: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod downward-api-437111c6-1d82-4ee5-8307-276a46432704 container dapi-container: <nil>
STEP: delete the pod
Sep  2 13:37:32.541: INFO: Waiting for pod downward-api-437111c6-1d82-4ee5-8307-276a46432704 to disappear
Sep  2 13:37:32.650: INFO: Pod downward-api-437111c6-1d82-4ee5-8307-276a46432704 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.749 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":8,"skipped":94,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:32.885: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":28,"failed":0}
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:31.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:32.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-396" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":7,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:32.974: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 187 lines ...
Sep  2 13:37:13.437: INFO: PersistentVolume nfs-l6nt8 found and phase=Bound (109.422187ms)
Sep  2 13:37:13.546: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-tzn5t] to have phase Bound
Sep  2 13:37:13.656: INFO: PersistentVolumeClaim pvc-tzn5t found and phase=Bound (109.43967ms)
STEP: Checking pod has write access to PersistentVolumes
Sep  2 13:37:13.766: INFO: Creating nfs test pod
Sep  2 13:37:13.877: INFO: Pod should terminate with exitcode 0 (success)
Sep  2 13:37:13.877: INFO: Waiting up to 5m0s for pod "pvc-tester-7j7wn" in namespace "pv-552" to be "Succeeded or Failed"
Sep  2 13:37:13.987: INFO: Pod "pvc-tester-7j7wn": Phase="Pending", Reason="", readiness=false. Elapsed: 110.134609ms
Sep  2 13:37:16.097: INFO: Pod "pvc-tester-7j7wn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220332015s
STEP: Saw pod success
Sep  2 13:37:16.097: INFO: Pod "pvc-tester-7j7wn" satisfied condition "Succeeded or Failed"
Sep  2 13:37:16.097: INFO: Pod pvc-tester-7j7wn succeeded 
Sep  2 13:37:16.097: INFO: Deleting pod "pvc-tester-7j7wn" in namespace "pv-552"
Sep  2 13:37:16.218: INFO: Wait up to 5m0s for pod "pvc-tester-7j7wn" to be fully deleted
Sep  2 13:37:16.437: INFO: Creating nfs test pod
Sep  2 13:37:16.549: INFO: Pod should terminate with exitcode 0 (success)
Sep  2 13:37:16.549: INFO: Waiting up to 5m0s for pod "pvc-tester-hghl5" in namespace "pv-552" to be "Succeeded or Failed"
Sep  2 13:37:16.658: INFO: Pod "pvc-tester-hghl5": Phase="Pending", Reason="", readiness=false. Elapsed: 109.600422ms
Sep  2 13:37:18.769: INFO: Pod "pvc-tester-hghl5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220249566s
Sep  2 13:37:20.881: INFO: Pod "pvc-tester-hghl5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332376981s
Sep  2 13:37:22.992: INFO: Pod "pvc-tester-hghl5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443117363s
Sep  2 13:37:25.102: INFO: Pod "pvc-tester-hghl5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553866743s
STEP: Saw pod success
Sep  2 13:37:25.103: INFO: Pod "pvc-tester-hghl5" satisfied condition "Succeeded or Failed"
Sep  2 13:37:25.103: INFO: Pod pvc-tester-hghl5 succeeded 
Sep  2 13:37:25.103: INFO: Deleting pod "pvc-tester-hghl5" in namespace "pv-552"
Sep  2 13:37:25.224: INFO: Wait up to 5m0s for pod "pvc-tester-hghl5" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Sep  2 13:37:25.777: INFO: Deleting PVC pvc-rhssz to trigger reclamation of PV nfs-dxhlz
Sep  2 13:37:25.777: INFO: Deleting PersistentVolumeClaim "pvc-rhssz"
... skipping 54 lines ...
      Driver csi-hostpath doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":8,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:33.691: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 208 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:34.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-641" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":9,"skipped":127,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:34.754: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 237 lines ...
• [SLOW TEST:64.792 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:377
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":6,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:44.022: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 199 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:478
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:482
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Sep  2 13:37:34.493: INFO: Waiting up to 5m0s for pod "pod-always-succeedc1551edd-d8a6-4601-823e-a7b7dd4c2982" in namespace "pods-3320" to be "Succeeded or Failed"
Sep  2 13:37:34.607: INFO: Pod "pod-always-succeedc1551edd-d8a6-4601-823e-a7b7dd4c2982": Phase="Pending", Reason="", readiness=false. Elapsed: 113.624767ms
Sep  2 13:37:36.717: INFO: Pod "pod-always-succeedc1551edd-d8a6-4601-823e-a7b7dd4c2982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224089519s
Sep  2 13:37:38.828: INFO: Pod "pod-always-succeedc1551edd-d8a6-4601-823e-a7b7dd4c2982": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335154291s
Sep  2 13:37:40.939: INFO: Pod "pod-always-succeedc1551edd-d8a6-4601-823e-a7b7dd4c2982": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445738965s
Sep  2 13:37:43.050: INFO: Pod "pod-always-succeedc1551edd-d8a6-4601-823e-a7b7dd4c2982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.556599845s
STEP: Saw pod success
Sep  2 13:37:43.050: INFO: Pod "pod-always-succeedc1551edd-d8a6-4601-823e-a7b7dd4c2982" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:45.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:476
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:482
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":9,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:45.510: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 86 lines ...
Sep  2 13:37:33.084: INFO: PersistentVolumeClaim pvc-nfzs5 found but phase is Pending instead of Bound.
Sep  2 13:37:35.204: INFO: PersistentVolumeClaim pvc-nfzs5 found and phase=Bound (10.676158708s)
Sep  2 13:37:35.204: INFO: Waiting up to 3m0s for PersistentVolume local-vc22w to have phase Bound
Sep  2 13:37:35.320: INFO: PersistentVolume local-vc22w found and phase=Bound (115.087307ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gtg6
STEP: Creating a pod to test subpath
Sep  2 13:37:35.661: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gtg6" in namespace "provisioning-4919" to be "Succeeded or Failed"
Sep  2 13:37:35.771: INFO: Pod "pod-subpath-test-preprovisionedpv-gtg6": Phase="Pending", Reason="", readiness=false. Elapsed: 110.358108ms
Sep  2 13:37:37.887: INFO: Pod "pod-subpath-test-preprovisionedpv-gtg6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226142748s
Sep  2 13:37:39.998: INFO: Pod "pod-subpath-test-preprovisionedpv-gtg6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336700567s
Sep  2 13:37:42.107: INFO: Pod "pod-subpath-test-preprovisionedpv-gtg6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.446281518s
STEP: Saw pod success
Sep  2 13:37:42.107: INFO: Pod "pod-subpath-test-preprovisionedpv-gtg6" satisfied condition "Succeeded or Failed"
Sep  2 13:37:42.217: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-gtg6 container test-container-subpath-preprovisionedpv-gtg6: <nil>
STEP: delete the pod
Sep  2 13:37:42.458: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gtg6 to disappear
Sep  2 13:37:42.567: INFO: Pod pod-subpath-test-preprovisionedpv-gtg6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gtg6
Sep  2 13:37:42.567: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gtg6" in namespace "provisioning-4919"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":36,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:46.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-532" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":10,"skipped":85,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:46.585: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 60 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":19,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:45.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "apply-7686" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":7,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Sep  2 13:37:32.043: INFO: PersistentVolumeClaim pvc-cfz95 found but phase is Pending instead of Bound.
Sep  2 13:37:34.154: INFO: PersistentVolumeClaim pvc-cfz95 found and phase=Bound (10.671990874s)
Sep  2 13:37:34.154: INFO: Waiting up to 3m0s for PersistentVolume local-nd96s to have phase Bound
Sep  2 13:37:34.264: INFO: PersistentVolume local-nd96s found and phase=Bound (109.600429ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xdzt
STEP: Creating a pod to test subpath
Sep  2 13:37:34.611: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xdzt" in namespace "provisioning-5801" to be "Succeeded or Failed"
Sep  2 13:37:34.724: INFO: Pod "pod-subpath-test-preprovisionedpv-xdzt": Phase="Pending", Reason="", readiness=false. Elapsed: 113.56273ms
Sep  2 13:37:36.836: INFO: Pod "pod-subpath-test-preprovisionedpv-xdzt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224723229s
Sep  2 13:37:38.947: INFO: Pod "pod-subpath-test-preprovisionedpv-xdzt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335815573s
Sep  2 13:37:41.058: INFO: Pod "pod-subpath-test-preprovisionedpv-xdzt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446704199s
Sep  2 13:37:43.168: INFO: Pod "pod-subpath-test-preprovisionedpv-xdzt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557399851s
Sep  2 13:37:45.280: INFO: Pod "pod-subpath-test-preprovisionedpv-xdzt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.668997431s
STEP: Saw pod success
Sep  2 13:37:45.280: INFO: Pod "pod-subpath-test-preprovisionedpv-xdzt" satisfied condition "Succeeded or Failed"
Sep  2 13:37:45.392: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-xdzt container test-container-subpath-preprovisionedpv-xdzt: <nil>
STEP: delete the pod
Sep  2 13:37:45.638: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xdzt to disappear
Sep  2 13:37:45.757: INFO: Pod pod-subpath-test-preprovisionedpv-xdzt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xdzt
Sep  2 13:37:45.757: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xdzt" in namespace "provisioning-5801"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [sig-windows] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Sep  2 13:37:47.307: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 48 lines ...
• [SLOW TEST:14.637 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:47.680: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Sep  2 13:37:36.206: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  2 13:37:36.317: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-6nzq
STEP: Creating a pod to test subpath
Sep  2 13:37:36.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-6nzq" in namespace "provisioning-1328" to be "Succeeded or Failed"
Sep  2 13:37:36.536: INFO: Pod "pod-subpath-test-inlinevolume-6nzq": Phase="Pending", Reason="", readiness=false. Elapsed: 107.412206ms
Sep  2 13:37:38.647: INFO: Pod "pod-subpath-test-inlinevolume-6nzq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218159576s
Sep  2 13:37:40.756: INFO: Pod "pod-subpath-test-inlinevolume-6nzq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327779766s
Sep  2 13:37:42.867: INFO: Pod "pod-subpath-test-inlinevolume-6nzq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438624126s
Sep  2 13:37:44.976: INFO: Pod "pod-subpath-test-inlinevolume-6nzq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547357961s
Sep  2 13:37:47.085: INFO: Pod "pod-subpath-test-inlinevolume-6nzq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.65659347s
STEP: Saw pod success
Sep  2 13:37:47.085: INFO: Pod "pod-subpath-test-inlinevolume-6nzq" satisfied condition "Succeeded or Failed"
Sep  2 13:37:47.192: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-6nzq container test-container-subpath-inlinevolume-6nzq: <nil>
STEP: delete the pod
Sep  2 13:37:47.440: INFO: Waiting for pod pod-subpath-test-inlinevolume-6nzq to disappear
Sep  2 13:37:47.546: INFO: Pod pod-subpath-test-inlinevolume-6nzq no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-6nzq
Sep  2 13:37:47.546: INFO: Deleting pod "pod-subpath-test-inlinevolume-6nzq" in namespace "provisioning-1328"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":10,"skipped":153,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:47.996: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 88 lines ...
Sep  2 13:36:15.252: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-626
Sep  2 13:36:15.361: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-626
Sep  2 13:36:15.474: INFO: creating *v1.StatefulSet: csi-mock-volumes-626-6223/csi-mockplugin
Sep  2 13:36:15.586: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-626
Sep  2 13:36:15.695: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-626"
Sep  2 13:36:15.804: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-626 to register on node ip-172-20-61-191.eu-central-1.compute.internal
I0902 13:36:25.537446    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0902 13:36:25.645640    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-626","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0902 13:36:25.755169    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0902 13:36:25.863571    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0902 13:36:26.108737    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-626","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0902 13:36:27.210560    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-626"},"Error":"","FullError":null}
STEP: Creating pod
Sep  2 13:36:32.844: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Sep  2 13:36:32.956: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-x5gss] to have phase Bound
I0902 13:36:32.967201    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Sep  2 13:36:33.065: INFO: PersistentVolumeClaim pvc-x5gss found but phase is Pending instead of Bound.
I0902 13:36:33.075883    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca"}}},"Error":"","FullError":null}
Sep  2 13:36:35.175: INFO: PersistentVolumeClaim pvc-x5gss found and phase=Bound (2.218925561s)
I0902 13:36:37.052193    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0902 13:36:37.161642    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep  2 13:36:37.270: INFO: >>> kubeConfig: /root/.kube/config
I0902 13:36:38.005494    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca","storage.kubernetes.io/csiProvisionerIdentity":"1630589785917-8081-csi-mock-csi-mock-volumes-626"}},"Response":{},"Error":"","FullError":null}
I0902 13:36:38.521672    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0902 13:36:38.632843    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep  2 13:36:38.740: INFO: >>> kubeConfig: /root/.kube/config
Sep  2 13:36:39.491: INFO: >>> kubeConfig: /root/.kube/config
Sep  2 13:36:40.229: INFO: >>> kubeConfig: /root/.kube/config
I0902 13:36:40.955046    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca/globalmount","target_path":"/var/lib/kubelet/pods/bec793b5-92c4-4b5d-9128-45393474a6f5/volumes/kubernetes.io~csi/pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca","storage.kubernetes.io/csiProvisionerIdentity":"1630589785917-8081-csi-mock-csi-mock-volumes-626"}},"Response":{},"Error":"","FullError":null}
Sep  2 13:36:43.729: INFO: Deleting pod "pvc-volume-tester-8pmd7" in namespace "csi-mock-volumes-626"
Sep  2 13:36:43.839: INFO: Wait up to 5m0s for pod "pvc-volume-tester-8pmd7" to be fully deleted
Sep  2 13:36:44.415: INFO: >>> kubeConfig: /root/.kube/config
I0902 13:36:45.180520    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/bec793b5-92c4-4b5d-9128-45393474a6f5/volumes/kubernetes.io~csi/pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca/mount"},"Response":{},"Error":"","FullError":null}
I0902 13:36:45.325105    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0902 13:36:45.434217    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca/globalmount"},"Response":{},"Error":"","FullError":null}
I0902 13:36:46.196787    4849 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Sep  2 13:36:47.176: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-x5gss", GenerateName:"pvc-", Namespace:"csi-mock-volumes-626", SelfLink:"", UID:"d8caa890-937f-4a07-8e38-4a82ae6225ca", ResourceVersion:"7446", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186592, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001ad7a40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001ad7a58), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00359e630), VolumeMode:(*v1.PersistentVolumeMode)(0xc00359e640), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep  2 13:36:47.177: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-x5gss", GenerateName:"pvc-", Namespace:"csi-mock-volumes-626", SelfLink:"", UID:"d8caa890-937f-4a07-8e38-4a82ae6225ca", ResourceVersion:"7447", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186592, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-626"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001ad7ab8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001ad7ad0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001ad7ae8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001ad7b00), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00359e670), VolumeMode:(*v1.PersistentVolumeMode)(0xc00359e680), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep  2 13:36:47.177: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-x5gss", GenerateName:"pvc-", Namespace:"csi-mock-volumes-626", SelfLink:"", UID:"d8caa890-937f-4a07-8e38-4a82ae6225ca", ResourceVersion:"7458", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186592, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-626"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001709170), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001709188), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0017091a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0017091b8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca", StorageClassName:(*string)(0xc0030dbe40), VolumeMode:(*v1.PersistentVolumeMode)(0xc0030dbe50), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep  2 13:36:47.177: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-x5gss", GenerateName:"pvc-", Namespace:"csi-mock-volumes-626", SelfLink:"", UID:"d8caa890-937f-4a07-8e38-4a82ae6225ca", ResourceVersion:"7459", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186592, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-626"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008ac690), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008ac6a8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008ac6c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008ac6d8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008ac6f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008ac708), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca", StorageClassName:(*string)(0xc000fe1ee0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000fe1ef0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep  2 13:36:47.177: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-x5gss", GenerateName:"pvc-", Namespace:"csi-mock-volumes-626", SelfLink:"", UID:"d8caa890-937f-4a07-8e38-4a82ae6225ca", ResourceVersion:"8114", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186592, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(0xc0008ac738), DeletionGracePeriodSeconds:(*int64)(0xc000b6e1f8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-626"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008ac750), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008ac768), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008ac780), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008ac798), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0008ac7b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0008ac7c8), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d8caa890-937f-4a07-8e38-4a82ae6225ca", StorageClassName:(*string)(0xc000fe1f60), 
VolumeMode:(*v1.PersistentVolumeMode)(0xc000fe1f80), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":2,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:48.201: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "apply-1550" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•SS
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":9,"skipped":64,"failed":0}

SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":7,"skipped":58,"failed":0}
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:48.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apf
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:50.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-7304" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":8,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:51.123: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Sep  2 13:37:48.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep  2 13:37:48.658: INFO: Waiting up to 5m0s for pod "security-context-07345b92-93aa-445a-bce5-d82968c67228" in namespace "security-context-8295" to be "Succeeded or Failed"
Sep  2 13:37:48.766: INFO: Pod "security-context-07345b92-93aa-445a-bce5-d82968c67228": Phase="Pending", Reason="", readiness=false. Elapsed: 108.163687ms
Sep  2 13:37:50.874: INFO: Pod "security-context-07345b92-93aa-445a-bce5-d82968c67228": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.216573359s
STEP: Saw pod success
Sep  2 13:37:50.874: INFO: Pod "security-context-07345b92-93aa-445a-bce5-d82968c67228" satisfied condition "Succeeded or Failed"
Sep  2 13:37:50.982: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod security-context-07345b92-93aa-445a-bce5-d82968c67228 container test-container: <nil>
STEP: delete the pod
Sep  2 13:37:51.204: INFO: Waiting for pod security-context-07345b92-93aa-445a-bce5-d82968c67228 to disappear
Sep  2 13:37:51.315: INFO: Pod security-context-07345b92-93aa-445a-bce5-d82968c67228 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:51.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8295" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":11,"skipped":155,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:51.546: INFO: Only supported for providers [gce gke] (not aws)
... skipping 170 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should require VolumeAttach for drivers with attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment","total":-1,"completed":8,"skipped":84,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:51.636: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 201 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source","total":-1,"completed":4,"skipped":56,"failed":0}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:49.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep  2 13:37:49.993: INFO: Waiting up to 5m0s for pod "test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861" in namespace "svcaccounts-1484" to be "Succeeded or Failed"
Sep  2 13:37:50.101: INFO: Pod "test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861": Phase="Pending", Reason="", readiness=false. Elapsed: 108.16398ms
Sep  2 13:37:52.210: INFO: Pod "test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216545104s
Sep  2 13:37:54.323: INFO: Pod "test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329339033s
Sep  2 13:37:56.432: INFO: Pod "test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.43874618s
STEP: Saw pod success
Sep  2 13:37:56.432: INFO: Pod "test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861" satisfied condition "Succeeded or Failed"
Sep  2 13:37:56.542: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861 container agnhost-container: <nil>
STEP: delete the pod
Sep  2 13:37:56.769: INFO: Waiting for pod test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861 to disappear
Sep  2 13:37:56.878: INFO: Pod test-pod-8fa21ffa-34da-42ad-a4a8-cee0b2bb2861 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.760 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:57.104: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 79 lines ...
Sep  2 13:37:17.048: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathvzwxs] to have phase Bound
Sep  2 13:37:17.158: INFO: PersistentVolumeClaim csi-hostpathvzwxs found but phase is Pending instead of Bound.
Sep  2 13:37:19.267: INFO: PersistentVolumeClaim csi-hostpathvzwxs found but phase is Pending instead of Bound.
Sep  2 13:37:21.377: INFO: PersistentVolumeClaim csi-hostpathvzwxs found and phase=Bound (4.328247333s)
STEP: Creating pod pod-subpath-test-dynamicpv-kp59
STEP: Creating a pod to test subpath
Sep  2 13:37:21.708: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-kp59" in namespace "provisioning-3510" to be "Succeeded or Failed"
Sep  2 13:37:21.819: INFO: Pod "pod-subpath-test-dynamicpv-kp59": Phase="Pending", Reason="", readiness=false. Elapsed: 110.513184ms
Sep  2 13:37:23.932: INFO: Pod "pod-subpath-test-dynamicpv-kp59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223510363s
Sep  2 13:37:26.041: INFO: Pod "pod-subpath-test-dynamicpv-kp59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333288871s
Sep  2 13:37:28.153: INFO: Pod "pod-subpath-test-dynamicpv-kp59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444969993s
Sep  2 13:37:30.263: INFO: Pod "pod-subpath-test-dynamicpv-kp59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555120381s
Sep  2 13:37:32.374: INFO: Pod "pod-subpath-test-dynamicpv-kp59": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665434765s
Sep  2 13:37:34.490: INFO: Pod "pod-subpath-test-dynamicpv-kp59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.781835456s
STEP: Saw pod success
Sep  2 13:37:34.490: INFO: Pod "pod-subpath-test-dynamicpv-kp59" satisfied condition "Succeeded or Failed"
Sep  2 13:37:34.604: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-kp59 container test-container-subpath-dynamicpv-kp59: <nil>
STEP: delete the pod
Sep  2 13:37:34.869: INFO: Waiting for pod pod-subpath-test-dynamicpv-kp59 to disappear
Sep  2 13:37:34.980: INFO: Pod pod-subpath-test-dynamicpv-kp59 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-kp59
Sep  2 13:37:34.980: INFO: Deleting pod "pod-subpath-test-dynamicpv-kp59" in namespace "provisioning-3510"
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":3,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":10,"skipped":98,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:34.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 92 lines ...
• [SLOW TEST:10.713 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":11,"skipped":98,"failed":0}
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:58.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:37:58.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-5300" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":12,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:37:59.143: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 106 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":5,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:37:51.798: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae" in namespace "projected-579" to be "Succeeded or Failed"
Sep  2 13:37:51.906: INFO: Pod "downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae": Phase="Pending", Reason="", readiness=false. Elapsed: 108.089756ms
Sep  2 13:37:54.023: INFO: Pod "downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224330718s
Sep  2 13:37:56.132: INFO: Pod "downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33313508s
Sep  2 13:37:58.241: INFO: Pod "downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442704826s
Sep  2 13:38:00.351: INFO: Pod "downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.552721795s
STEP: Saw pod success
Sep  2 13:38:00.351: INFO: Pod "downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae" satisfied condition "Succeeded or Failed"
Sep  2 13:38:00.460: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae container client-container: <nil>
STEP: delete the pod
Sep  2 13:38:00.689: INFO: Waiting for pod downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae to disappear
Sep  2 13:38:00.797: INFO: Pod downwardapi-volume-8a80b3f4-630a-46c0-84da-774a6bd7adae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.881 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":61,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:49.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Sep  2 13:37:50.038: INFO: Waiting up to 5m0s for pod "var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21" in namespace "var-expansion-3374" to be "Succeeded or Failed"
Sep  2 13:37:50.149: INFO: Pod "var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21": Phase="Pending", Reason="", readiness=false. Elapsed: 110.773942ms
Sep  2 13:37:52.258: INFO: Pod "var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21951613s
Sep  2 13:37:54.367: INFO: Pod "var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329072031s
Sep  2 13:37:56.477: INFO: Pod "var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438549976s
Sep  2 13:37:58.586: INFO: Pod "var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547761304s
Sep  2 13:38:00.697: INFO: Pod "var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.65886328s
STEP: Saw pod success
Sep  2 13:38:00.697: INFO: Pod "var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21" satisfied condition "Succeeded or Failed"
Sep  2 13:38:00.805: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21 container dapi-container: <nil>
STEP: delete the pod
Sep  2 13:38:01.028: INFO: Waiting for pod var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21 to disappear
Sep  2 13:38:01.136: INFO: Pod var-expansion-2c652eda-b323-40e4-aa9d-807a593b3f21 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.985 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:7.443 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":5,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:01.370: INFO: Only supported for providers [vsphere] (not aws)
... skipping 49 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 204 lines ...
• [SLOW TEST:35.038 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete jobs and pods created by cronjob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1155
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob","total":-1,"completed":7,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:57.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Sep  2 13:37:57.778: INFO: Waiting up to 5m0s for pod "pod-c4ffe2a7-c280-4e32-aad8-20a700b5e6e8" in namespace "emptydir-4037" to be "Succeeded or Failed"
Sep  2 13:37:57.886: INFO: Pod "pod-c4ffe2a7-c280-4e32-aad8-20a700b5e6e8": Phase="Pending", Reason="", readiness=false. Elapsed: 107.667892ms
Sep  2 13:37:59.995: INFO: Pod "pod-c4ffe2a7-c280-4e32-aad8-20a700b5e6e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216156699s
Sep  2 13:38:02.111: INFO: Pod "pod-c4ffe2a7-c280-4e32-aad8-20a700b5e6e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332761295s
STEP: Saw pod success
Sep  2 13:38:02.111: INFO: Pod "pod-c4ffe2a7-c280-4e32-aad8-20a700b5e6e8" satisfied condition "Succeeded or Failed"
Sep  2 13:38:02.220: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-c4ffe2a7-c280-4e32-aad8-20a700b5e6e8 container test-container: <nil>
STEP: delete the pod
Sep  2 13:38:02.449: INFO: Waiting for pod pod-c4ffe2a7-c280-4e32-aad8-20a700b5e6e8 to disappear
Sep  2 13:38:02.560: INFO: Pod pod-c4ffe2a7-c280-4e32-aad8-20a700b5e6e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.662 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":49,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:02.818: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:37:58.405: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad021652-3b49-414d-ac52-104fa1a1dc7b" in namespace "downward-api-2019" to be "Succeeded or Failed"
Sep  2 13:37:58.515: INFO: Pod "downwardapi-volume-ad021652-3b49-414d-ac52-104fa1a1dc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 110.279382ms
Sep  2 13:38:00.625: INFO: Pod "downwardapi-volume-ad021652-3b49-414d-ac52-104fa1a1dc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219683235s
Sep  2 13:38:02.734: INFO: Pod "downwardapi-volume-ad021652-3b49-414d-ac52-104fa1a1dc7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329049342s
STEP: Saw pod success
Sep  2 13:38:02.734: INFO: Pod "downwardapi-volume-ad021652-3b49-414d-ac52-104fa1a1dc7b" satisfied condition "Succeeded or Failed"
Sep  2 13:38:02.843: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod downwardapi-volume-ad021652-3b49-414d-ac52-104fa1a1dc7b container client-container: <nil>
STEP: delete the pod
Sep  2 13:38:03.075: INFO: Waiting for pod downwardapi-volume-ad021652-3b49-414d-ac52-104fa1a1dc7b to disappear
Sep  2 13:38:03.187: INFO: Pod downwardapi-volume-ad021652-3b49-414d-ac52-104fa1a1dc7b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.661 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
Sep  2 13:38:02.024: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep  2 13:38:02.024: INFO: stdout: "scheduler etcd-0 controller-manager etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Sep  2 13:38:02.024: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7057 get componentstatuses scheduler'
Sep  2 13:38:02.424: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep  2 13:38:02.424: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-0
Sep  2 13:38:02.424: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7057 get componentstatuses etcd-0'
Sep  2 13:38:02.835: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep  2 13:38:02.835: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-0   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
STEP: getting status of controller-manager
Sep  2 13:38:02.835: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7057 get componentstatuses controller-manager'
Sep  2 13:38:03.249: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep  2 13:38:03.249: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-1
Sep  2 13:38:03.249: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7057 get componentstatuses etcd-1'
Sep  2 13:38:03.657: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Sep  2 13:38:03.657: INFO: stdout: "NAME     STATUS    MESSAGE                         ERROR\netcd-1   Healthy   {\"health\":\"true\",\"reason\":\"\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:38:03.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7057" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":10,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:38:04.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6408" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":10,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:04.661: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-93e6420f-6201-41fe-b5ae-0090f36b0353
STEP: Creating a pod to test consume configMaps
Sep  2 13:37:59.974: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19a211ac-16ae-41b6-bb74-61d680b7ffcb" in namespace "projected-6768" to be "Succeeded or Failed"
Sep  2 13:38:00.087: INFO: Pod "pod-projected-configmaps-19a211ac-16ae-41b6-bb74-61d680b7ffcb": Phase="Pending", Reason="", readiness=false. Elapsed: 112.115941ms
Sep  2 13:38:02.198: INFO: Pod "pod-projected-configmaps-19a211ac-16ae-41b6-bb74-61d680b7ffcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223191s
Sep  2 13:38:04.307: INFO: Pod "pod-projected-configmaps-19a211ac-16ae-41b6-bb74-61d680b7ffcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332766356s
STEP: Saw pod success
Sep  2 13:38:04.307: INFO: Pod "pod-projected-configmaps-19a211ac-16ae-41b6-bb74-61d680b7ffcb" satisfied condition "Succeeded or Failed"
Sep  2 13:38:04.416: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-projected-configmaps-19a211ac-16ae-41b6-bb74-61d680b7ffcb container agnhost-container: <nil>
STEP: delete the pod
Sep  2 13:38:04.649: INFO: Waiting for pod pod-projected-configmaps-19a211ac-16ae-41b6-bb74-61d680b7ffcb to disappear
Sep  2 13:38:04.758: INFO: Pod pod-projected-configmaps-19a211ac-16ae-41b6-bb74-61d680b7ffcb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.776 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":111,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 119 lines ...
Sep  2 13:37:46.881: INFO: PersistentVolumeClaim pvc-xj4dj found but phase is Pending instead of Bound.
Sep  2 13:37:48.992: INFO: PersistentVolumeClaim pvc-xj4dj found and phase=Bound (12.769102513s)
Sep  2 13:37:48.992: INFO: Waiting up to 3m0s for PersistentVolume local-42pm6 to have phase Bound
Sep  2 13:37:49.106: INFO: PersistentVolume local-42pm6 found and phase=Bound (113.776417ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-p5q7
STEP: Creating a pod to test subpath
Sep  2 13:37:49.432: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-p5q7" in namespace "provisioning-1186" to be "Succeeded or Failed"
Sep  2 13:37:49.541: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7": Phase="Pending", Reason="", readiness=false. Elapsed: 108.430995ms
Sep  2 13:37:51.654: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221974883s
Sep  2 13:37:53.764: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331379163s
Sep  2 13:37:55.876: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443993317s
Sep  2 13:37:57.986: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553254667s
Sep  2 13:38:00.095: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662846246s
Sep  2 13:38:02.214: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.781878597s
Sep  2 13:38:04.324: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.892104781s
STEP: Saw pod success
Sep  2 13:38:04.325: INFO: Pod "pod-subpath-test-preprovisionedpv-p5q7" satisfied condition "Succeeded or Failed"
Sep  2 13:38:04.434: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-p5q7 container test-container-subpath-preprovisionedpv-p5q7: <nil>
STEP: delete the pod
Sep  2 13:38:04.671: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-p5q7 to disappear
Sep  2 13:38:04.781: INFO: Pod pod-subpath-test-preprovisionedpv-p5q7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-p5q7
Sep  2 13:38:04.781: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-p5q7" in namespace "provisioning-1186"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:06.327: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 24 lines ...
• [SLOW TEST:54.111 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:38:03.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep  2 13:38:04.548: INFO: Waiting up to 5m0s for pod "security-context-e2d7862a-1883-4663-b9da-281e06e4e72e" in namespace "security-context-362" to be "Succeeded or Failed"
Sep  2 13:38:04.659: INFO: Pod "security-context-e2d7862a-1883-4663-b9da-281e06e4e72e": Phase="Pending", Reason="", readiness=false. Elapsed: 110.920457ms
Sep  2 13:38:06.774: INFO: Pod "security-context-e2d7862a-1883-4663-b9da-281e06e4e72e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.225693907s
STEP: Saw pod success
Sep  2 13:38:06.774: INFO: Pod "security-context-e2d7862a-1883-4663-b9da-281e06e4e72e" satisfied condition "Succeeded or Failed"
Sep  2 13:38:06.891: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod security-context-e2d7862a-1883-4663-b9da-281e06e4e72e container test-container: <nil>
STEP: delete the pod
Sep  2 13:38:07.113: INFO: Waiting for pod security-context-e2d7862a-1883-4663-b9da-281e06e4e72e to disappear
Sep  2 13:38:07.223: INFO: Pod security-context-e2d7862a-1883-4663-b9da-281e06e4e72e no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:38:07.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-362" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":11,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
• [SLOW TEST:9.465 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:12.934: INFO: Only supported for providers [vsphere] (not aws)
... skipping 110 lines ...
Sep  2 13:38:10.205: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  2 13:38:10.205: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5312 describe pod agnhost-primary-bhxh7'
Sep  2 13:38:10.839: INFO: stderr: ""
Sep  2 13:38:10.839: INFO: stdout: "Name:         agnhost-primary-bhxh7\nNamespace:    kubectl-5312\nPriority:     0\nNode:         ip-172-20-49-181.eu-central-1.compute.internal/172.20.49.181\nStart Time:   Thu, 02 Sep 2021 13:38:07 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.4.138\nIPs:\n  IP:           100.96.4.138\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://b0c835aaa55cd4a03a1a5c54b2fed9fd5623fc21a3a001f93d03145bea58e96a\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 02 Sep 2021 13:38:08 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kmx4r (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-kmx4r:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-5312/agnhost-primary-bhxh7 to ip-172-20-49-181.eu-central-1.compute.internal\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    2s    kubelet            Started container agnhost-primary\n"
Sep  2 13:38:10.839: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5312 describe rc agnhost-primary'
Sep  2 13:38:11.583: INFO: stderr: ""
Sep  2 13:38:11.583: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-5312\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-primary-bhxh7\n"
Sep  2 13:38:11.583: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5312 describe service agnhost-primary'
Sep  2 13:38:12.322: INFO: stderr: ""
Sep  2 13:38:12.322: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-5312\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.66.21.150\nIPs:               100.66.21.150\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.4.138:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep  2 13:38:12.432: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-5312 describe node ip-172-20-42-46.eu-central-1.compute.internal'
Sep  2 13:38:13.411: INFO: stderr: ""
Sep  2 13:38:13.411: INFO: stdout: "Name:               ip-172-20-42-46.eu-central-1.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=arm64\n                    beta.kubernetes.io/instance-type=m6g.large\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=eu-central-1\n                    failure-domain.beta.kubernetes.io/zone=eu-central-1a\n                    kops.k8s.io/instancegroup=nodes-eu-central-1a\n                    kubernetes.io/arch=arm64\n                    kubernetes.io/hostname=ip-172-20-42-46.eu-central-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=m6g.large\n                    topology.ebs.csi.aws.com/zone=eu-central-1a\n                    topology.hostpath.csi/node=ip-172-20-42-46.eu-central-1.compute.internal\n                    topology.kubernetes.io/region=eu-central-1\n                    topology.kubernetes.io/zone=eu-central-1a\nAnnotations:        csi.volume.kubernetes.io/nodeid:\n                      {\"csi-hostpath-provisioning-6619\":\"ip-172-20-42-46.eu-central-1.compute.internal\",\"csi-hostpath-provisioning-8104\":\"ip-172-20-42-46.eu-cen...\n                    io.cilium.network.ipv4-cilium-host: 100.96.3.207\n                    io.cilium.network.ipv4-health-ip: 100.96.3.157\n                    io.cilium.network.ipv4-pod-cidr: 100.96.3.0/24\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 02 Sep 2021 13:31:33 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-42-46.eu-central-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 02 Sep 2021 13:38:11 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 02 Sep 2021 13:32:16 +0000   Thu, 02 Sep 2021 13:32:16 +0000   CiliumIsUp                   Cilium is running on this node\n  MemoryPressure       False   Thu, 02 Sep 2021 13:37:43 +0000   Thu, 02 Sep 2021 13:31:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 02 Sep 2021 13:37:43 +0000   Thu, 02 Sep 2021 13:31:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 02 Sep 2021 13:37:43 +0000   Thu, 02 Sep 2021 13:31:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 02 Sep 2021 13:37:43 +0000   Thu, 02 Sep 2021 13:31:53 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   172.20.42.46\n  ExternalIP:   18.192.100.44\n  Hostname:     ip-172-20-42-46.eu-central-1.compute.internal\n  InternalDNS:  ip-172-20-42-46.eu-central-1.compute.internal\n  ExternalDNS:  ec2-18-192-100-44.eu-central-1.compute.amazonaws.com\nCapacity:\n  cpu:                2\n  ephemeral-storage:  48603264Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  hugepages-32Mi:     0\n  hugepages-64Ki:     0\n  memory:             7949784Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  44792768029\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  hugepages-32Mi:     0\n  hugepages-64Ki:     0\n  memory:             7847384Ki\n  pods:               110\nSystem Info:\n  Machine ID:                         3d7f417e430f4201835b4847fd77b154\n  System UUID:                        ec2ea6f9-1a22-f5c6-fbd0-2a33e66528c6\n  Boot ID:                            c46d2bb1-ddbb-4cb9-8213-46c66d7120aa\n  Kernel Version:                     5.11.0-1016-aws\n  OS Image:                           Ubuntu 20.04.3 LTS\n  Operating System:                   linux\n  Architecture:                       arm64\n  Container Runtime Version:          containerd://1.4.9\n  Kubelet Version:                    v1.23.0-alpha.1\n  Kube-Proxy Version:                 v1.23.0-alpha.1\nPodCIDR:                              100.96.3.0/24\nPodCIDRs:                             100.96.3.0/24\nProviderID:                           aws:///eu-central-1a/i-0473bf2ddc2972a75\nNon-terminated Pods:                  (19 in total)\n  Namespace                           Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                           ----                                                            ------------  ----------  ---------------  -------------  ---\n  apply-1550                          deployment-shared-unset-55bfccbb6c-8lgms                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s\n  apply-7686                          deployment-55649fd747-4w67b                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s\n  csi-mock-volumes-2880-1545          csi-mockplugin-0                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s\n  csi-mock-volumes-2880-1545          csi-mockplugin-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s\n  kube-system                         cilium-khszs                                                    100m (5%)     0 (0%)      128Mi (1%)       100Mi (1%)     6m40s\n  kube-system                         ebs-csi-node-drnd2                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s\n  nettest-7082                        netserver-0                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s\n  nettest-765                         netserver-0                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s\n  persistent-local-volumes-test-5431  hostexec-ip-172-20-42-46.eu-central-1.compute.internal-n7p77    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s\n  persistent-local-volumes-test-5431  pod-d946c59b-9adb-455f-bacd-635ab6361d18                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s\n  provisioning-6619-1365              csi-hostpathplugin-0                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s\n  provisioning-8104-3124              csi-hostpathplugin-0                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s\n  provisioning-8104                   pod-subpath-test-dynamicpv-5hnv                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s\n  services-4079                       affinity-nodeport-transition-zxzvd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s\n  volume-1802                         hostexec-ip-172-20-42-46.eu-central-1.compute.internal-zpk2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s\n  volume-1802                         local-injector                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\n  volume-4338-7727                    csi-hostpathplugin-0                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s\n  volume-4338                         hostpath-client                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s\n  volume-7655                         hostexec-ip-172-20-42-46.eu-central-1.compute.internal-kt4rg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                100m (5%)   0 (0%)\n  memory             128Mi (1%)  100Mi (1%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\n  hugepages-32Mi     0 (0%)      0 (0%)\n  hugepages-64Ki     0 (0%)      0 (0%)\nEvents:\n  Type     Reason                   Age                    From     Message\n  ----     ------                   ----                   ----     -------\n  Normal   Starting                 6m40s                  kubelet  Starting kubelet.\n  Warning  InvalidDiskCapacity      6m40s                  kubelet  invalid capacity 0 on image filesystem\n  Normal   NodeHasSufficientMemory  6m40s (x2 over 6m40s)  kubelet  Node ip-172-20-42-46.eu-central-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    6m40s (x2 over 6m40s)  kubelet  Node ip-172-20-42-46.eu-central-1.compute.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     6m40s (x2 over 6m40s)  kubelet  Node ip-172-20-42-46.eu-central-1.compute.internal status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced  6m40s                  kubelet  Updated Node Allocatable limit across pods\n  Normal   NodeReady                6m20s                  kubelet  Node ip-172-20-42-46.eu-central-1.compute.internal status is now: NodeReady\n"
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1099
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":11,"skipped":113,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Sep  2 13:38:07.010: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep  2 13:38:07.010: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-cmhw
STEP: Creating a pod to test subpath
Sep  2 13:38:07.124: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-cmhw" in namespace "provisioning-3095" to be "Succeeded or Failed"
Sep  2 13:38:07.235: INFO: Pod "pod-subpath-test-inlinevolume-cmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 111.349988ms
Sep  2 13:38:09.346: INFO: Pod "pod-subpath-test-inlinevolume-cmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221728503s
Sep  2 13:38:11.458: INFO: Pod "pod-subpath-test-inlinevolume-cmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33385251s
Sep  2 13:38:13.568: INFO: Pod "pod-subpath-test-inlinevolume-cmhw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443946099s
Sep  2 13:38:15.678: INFO: Pod "pod-subpath-test-inlinevolume-cmhw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553907709s
STEP: Saw pod success
Sep  2 13:38:15.678: INFO: Pod "pod-subpath-test-inlinevolume-cmhw" satisfied condition "Succeeded or Failed"
Sep  2 13:38:15.787: INFO: Trying to get logs from node ip-172-20-61-191.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-cmhw container test-container-subpath-inlinevolume-cmhw: <nil>
STEP: delete the pod
Sep  2 13:38:16.027: INFO: Waiting for pod pod-subpath-test-inlinevolume-cmhw to disappear
Sep  2 13:38:16.136: INFO: Pod pod-subpath-test-inlinevolume-cmhw no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-cmhw
Sep  2 13:38:16.136: INFO: Deleting pod "pod-subpath-test-inlinevolume-cmhw" in namespace "provisioning-3095"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:16.596: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 117 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:38:16.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":7,"skipped":75,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":12,"skipped":42,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:17.138: INFO: Only supported for providers [openstack] (not aws)
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":4,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 123 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1475
    should add annotations for pods in rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":12,"skipped":114,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:21.627: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 187 lines ...
• [SLOW TEST:38.133 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:38:01.028: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Sep  2 13:38:17.661: INFO: PersistentVolumeClaim pvc-89xtj found but phase is Pending instead of Bound.
Sep  2 13:38:19.771: INFO: PersistentVolumeClaim pvc-89xtj found and phase=Bound (12.77777791s)
Sep  2 13:38:19.772: INFO: Waiting up to 3m0s for PersistentVolume local-vckbh to have phase Bound
Sep  2 13:38:19.883: INFO: PersistentVolume local-vckbh found and phase=Bound (111.809924ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-zp8k
STEP: Creating a pod to test exec-volume-test
Sep  2 13:38:20.215: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-zp8k" in namespace "volume-7655" to be "Succeeded or Failed"
Sep  2 13:38:20.324: INFO: Pod "exec-volume-test-preprovisionedpv-zp8k": Phase="Pending", Reason="", readiness=false. Elapsed: 109.261951ms
Sep  2 13:38:22.435: INFO: Pod "exec-volume-test-preprovisionedpv-zp8k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220121374s
Sep  2 13:38:24.550: INFO: Pod "exec-volume-test-preprovisionedpv-zp8k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335060332s
Sep  2 13:38:26.661: INFO: Pod "exec-volume-test-preprovisionedpv-zp8k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446787496s
Sep  2 13:38:28.771: INFO: Pod "exec-volume-test-preprovisionedpv-zp8k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.556555406s
Sep  2 13:38:30.883: INFO: Pod "exec-volume-test-preprovisionedpv-zp8k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.668065057s
STEP: Saw pod success
Sep  2 13:38:30.883: INFO: Pod "exec-volume-test-preprovisionedpv-zp8k" satisfied condition "Succeeded or Failed"
Sep  2 13:38:30.993: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-zp8k container exec-container-preprovisionedpv-zp8k: <nil>
STEP: delete the pod
Sep  2 13:38:31.220: INFO: Waiting for pod exec-volume-test-preprovisionedpv-zp8k to disappear
Sep  2 13:38:31.330: INFO: Pod exec-volume-test-preprovisionedpv-zp8k no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-zp8k
Sep  2 13:38:31.330: INFO: Deleting pod "exec-volume-test-preprovisionedpv-zp8k" in namespace "volume-7655"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":7,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:32.769: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-fbcd5374-3ef4-4e72-a7ea-ac87a59af8e4
STEP: Creating a pod to test consume configMaps
Sep  2 13:38:21.449: INFO: Waiting up to 5m0s for pod "pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736" in namespace "configmap-6765" to be "Succeeded or Failed"
Sep  2 13:38:21.558: INFO: Pod "pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736": Phase="Pending", Reason="", readiness=false. Elapsed: 108.630931ms
Sep  2 13:38:23.668: INFO: Pod "pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218311315s
Sep  2 13:38:25.780: INFO: Pod "pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330240783s
Sep  2 13:38:27.889: INFO: Pod "pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439637085s
Sep  2 13:38:30.000: INFO: Pod "pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550837037s
Sep  2 13:38:32.110: INFO: Pod "pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.66062732s
STEP: Saw pod success
Sep  2 13:38:32.110: INFO: Pod "pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736" satisfied condition "Succeeded or Failed"
Sep  2 13:38:32.219: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736 container agnhost-container: <nil>
STEP: delete the pod
Sep  2 13:38:32.445: INFO: Waiting for pod pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736 to disappear
Sep  2 13:38:32.554: INFO: Pod pod-configmaps-abe71c84-6f54-44ad-9269-9ec2e55fd736 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:32.788: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 42 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:38:21.736: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Sep  2 13:38:22.289: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep  2 13:38:22.289: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-xlrf
STEP: Creating a pod to test subpath
Sep  2 13:38:22.408: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-xlrf" in namespace "provisioning-4185" to be "Succeeded or Failed"
Sep  2 13:38:22.516: INFO: Pod "pod-subpath-test-inlinevolume-xlrf": Phase="Pending", Reason="", readiness=false. Elapsed: 108.333942ms
Sep  2 13:38:24.627: INFO: Pod "pod-subpath-test-inlinevolume-xlrf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21932856s
Sep  2 13:38:26.741: INFO: Pod "pod-subpath-test-inlinevolume-xlrf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333103083s
Sep  2 13:38:28.852: INFO: Pod "pod-subpath-test-inlinevolume-xlrf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444006599s
Sep  2 13:38:30.961: INFO: Pod "pod-subpath-test-inlinevolume-xlrf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553581619s
Sep  2 13:38:33.070: INFO: Pod "pod-subpath-test-inlinevolume-xlrf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.662641227s
STEP: Saw pod success
Sep  2 13:38:33.070: INFO: Pod "pod-subpath-test-inlinevolume-xlrf" satisfied condition "Succeeded or Failed"
Sep  2 13:38:33.179: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-xlrf container test-container-subpath-inlinevolume-xlrf: <nil>
STEP: delete the pod
Sep  2 13:38:33.402: INFO: Waiting for pod pod-subpath-test-inlinevolume-xlrf to disappear
Sep  2 13:38:33.512: INFO: Pod pod-subpath-test-inlinevolume-xlrf no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-xlrf
Sep  2 13:38:33.512: INFO: Deleting pod "pod-subpath-test-inlinevolume-xlrf" in namespace "provisioning-4185"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":12,"skipped":101,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Sep  2 13:38:33.996: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":11,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:34.732: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":167,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:34.984: INFO: Only supported for providers [gce gke] (not aws)
... skipping 181 lines ...
Sep  2 13:35:50.815: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-2880
Sep  2 13:35:50.930: INFO: creating *v1.StatefulSet: csi-mock-volumes-2880-1545/csi-mockplugin-attacher
Sep  2 13:35:51.043: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-2880"
Sep  2 13:35:51.158: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2880 to register on node ip-172-20-42-46.eu-central-1.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Sep  2 13:36:01.059: INFO: Error getting logs for pod inline-volume-z7p6v: the server rejected our request for an unknown reason (get pods inline-volume-z7p6v)
Sep  2 13:36:01.198: INFO: Deleting pod "inline-volume-z7p6v" in namespace "csi-mock-volumes-2880"
Sep  2 13:36:01.317: INFO: Wait up to 5m0s for pod "inline-volume-z7p6v" to be fully deleted
STEP: Deleting the previously created pod
Sep  2 13:38:07.547: INFO: Deleting pod "pvc-volume-tester-6nxps" in namespace "csi-mock-volumes-2880"
Sep  2 13:38:07.658: INFO: Wait up to 5m0s for pod "pvc-volume-tester-6nxps" to be fully deleted
STEP: Checking CSI driver logs
Sep  2 13:38:13.990: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Sep  2 13:38:13.990: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Sep  2 13:38:13.990: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-6nxps
Sep  2 13:38:13.990: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2880
Sep  2 13:38:13.990: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 77fd7d38-a5f1-44ec-bfab-adb86fb19ead
Sep  2 13:38:13.990: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-8969422f61623cb949d98d797e78c5e84ab4d9c808cea54ed7e2a7965844a4a6","target_path":"/var/lib/kubelet/pods/77fd7d38-a5f1-44ec-bfab-adb86fb19ead/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-6nxps
Sep  2 13:38:13.990: INFO: Deleting pod "pvc-volume-tester-6nxps" in namespace "csi-mock-volumes-2880"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-2880
STEP: Waiting for namespaces [csi-mock-volumes-2880] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:38:22.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1398
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":8,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:44.689: INFO: Only supported for providers [gce gke] (not aws)
... skipping 132 lines ...
Sep  2 13:38:18.752: INFO: PersistentVolumeClaim pvc-xqngs found but phase is Pending instead of Bound.
Sep  2 13:38:20.863: INFO: PersistentVolumeClaim pvc-xqngs found and phase=Bound (8.549523351s)
Sep  2 13:38:20.863: INFO: Waiting up to 3m0s for PersistentVolume local-r7zt8 to have phase Bound
Sep  2 13:38:20.972: INFO: PersistentVolume local-r7zt8 found and phase=Bound (108.457738ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-t49z
STEP: Creating a pod to test atomic-volume-subpath
Sep  2 13:38:21.299: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-t49z" in namespace "provisioning-4303" to be "Succeeded or Failed"
Sep  2 13:38:21.411: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Pending", Reason="", readiness=false. Elapsed: 111.614226ms
Sep  2 13:38:23.520: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220378463s
Sep  2 13:38:25.629: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Running", Reason="", readiness=true. Elapsed: 4.33025275s
Sep  2 13:38:27.738: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Running", Reason="", readiness=true. Elapsed: 6.438944361s
Sep  2 13:38:29.847: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Running", Reason="", readiness=true. Elapsed: 8.547913561s
Sep  2 13:38:31.958: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Running", Reason="", readiness=true. Elapsed: 10.658318941s
... skipping 3 lines ...
Sep  2 13:38:40.397: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Running", Reason="", readiness=true. Elapsed: 19.098183808s
Sep  2 13:38:42.507: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Running", Reason="", readiness=true. Elapsed: 21.207393762s
Sep  2 13:38:44.618: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Running", Reason="", readiness=true. Elapsed: 23.318713026s
Sep  2 13:38:46.727: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Running", Reason="", readiness=true. Elapsed: 25.427436596s
Sep  2 13:38:48.836: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.537202539s
STEP: Saw pod success
Sep  2 13:38:48.837: INFO: Pod "pod-subpath-test-preprovisionedpv-t49z" satisfied condition "Succeeded or Failed"
Sep  2 13:38:48.945: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-t49z container test-container-subpath-preprovisionedpv-t49z: <nil>
STEP: delete the pod
Sep  2 13:38:49.169: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-t49z to disappear
Sep  2 13:38:49.278: INFO: Pod pod-subpath-test-preprovisionedpv-t49z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-t49z
Sep  2 13:38:49.278: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-t49z" in namespace "provisioning-4303"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:50.805: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:37:39.487: INFO: >>> kubeConfig: /root/.kube/config
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:51.051: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 97 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":14,"skipped":116,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:38:11.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 74 lines ...
Sep  2 13:38:34.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Sep  2 13:38:34.699: INFO: Waiting up to 5m0s for pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f" in namespace "security-context-2052" to be "Succeeded or Failed"
Sep  2 13:38:34.807: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 107.810868ms
Sep  2 13:38:36.917: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217190751s
Sep  2 13:38:39.027: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327284425s
Sep  2 13:38:41.135: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435655979s
Sep  2 13:38:43.243: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543842142s
Sep  2 13:38:45.352: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652251841s
Sep  2 13:38:47.461: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.761861137s
Sep  2 13:38:49.570: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.870262859s
Sep  2 13:38:51.678: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.978638812s
Sep  2 13:38:53.787: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.087292171s
STEP: Saw pod success
Sep  2 13:38:53.787: INFO: Pod "security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f" satisfied condition "Succeeded or Failed"
Sep  2 13:38:53.900: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f container test-container: <nil>
STEP: delete the pod
Sep  2 13:38:54.121: INFO: Waiting for pod security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f to disappear
Sep  2 13:38:54.228: INFO: Pod security-context-87d2bb3e-775e-48f7-a2fc-93c6bf5eb21f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:20.431 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":13,"skipped":114,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:54.481: INFO: Driver "local" does not provide raw block - skipping
... skipping 115 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":81,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:57.059: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":98,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:38:46.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-3e04678a-9dd9-4309-8e49-27e44c0f0116
STEP: Creating a pod to test consume secrets
Sep  2 13:38:47.689: INFO: Waiting up to 5m0s for pod "pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1" in namespace "secrets-4956" to be "Succeeded or Failed"
Sep  2 13:38:47.813: INFO: Pod "pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1": Phase="Pending", Reason="", readiness=false. Elapsed: 124.58719ms
Sep  2 13:38:49.925: INFO: Pod "pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235831883s
Sep  2 13:38:52.036: INFO: Pod "pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347084975s
Sep  2 13:38:54.146: INFO: Pod "pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456824612s
Sep  2 13:38:56.257: INFO: Pod "pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.567810663s
Sep  2 13:38:58.367: INFO: Pod "pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.678302607s
STEP: Saw pod success
Sep  2 13:38:58.367: INFO: Pod "pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1" satisfied condition "Succeeded or Failed"
Sep  2 13:38:58.477: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1 container secret-volume-test: <nil>
STEP: delete the pod
Sep  2 13:38:58.706: INFO: Waiting for pod pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1 to disappear
Sep  2 13:38:58.819: INFO: Pod pod-secrets-496c47ea-4d86-409b-917c-32bd617c16a1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.124 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":98,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:38:59.050: INFO: >>> kubeConfig: /root/.kube/config
... skipping 118 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47
    should be mountable
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":13,"skipped":171,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:38:59.882: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 112 lines ...
• [SLOW TEST:39.211 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":13,"skipped":116,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:00.866: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-8cb70e4d-881a-4509-92e2-2980a3487ada
STEP: Creating a pod to test consume configMaps
Sep  2 13:38:55.306: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d" in namespace "configmap-4863" to be "Succeeded or Failed"
Sep  2 13:38:55.414: INFO: Pod "pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d": Phase="Pending", Reason="", readiness=false. Elapsed: 108.043754ms
Sep  2 13:38:57.523: INFO: Pod "pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216948549s
Sep  2 13:38:59.632: INFO: Pod "pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326322606s
Sep  2 13:39:01.744: INFO: Pod "pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437990862s
STEP: Saw pod success
Sep  2 13:39:01.744: INFO: Pod "pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d" satisfied condition "Succeeded or Failed"
Sep  2 13:39:01.852: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d container configmap-volume-test: <nil>
STEP: delete the pod
Sep  2 13:39:02.078: INFO: Waiting for pod pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d to disappear
Sep  2 13:39:02.186: INFO: Pod pod-configmaps-bb9b0d8c-b454-4c2b-8a2f-22d432c9ba8d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.862 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":136,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 60 lines ...
Sep  2 13:38:07.663: INFO: PersistentVolumeClaim csi-hostpathdjv5l found but phase is Pending instead of Bound.
Sep  2 13:38:09.772: INFO: PersistentVolumeClaim csi-hostpathdjv5l found but phase is Pending instead of Bound.
Sep  2 13:38:11.883: INFO: PersistentVolumeClaim csi-hostpathdjv5l found but phase is Pending instead of Bound.
Sep  2 13:38:13.993: INFO: PersistentVolumeClaim csi-hostpathdjv5l found and phase=Bound (6.439233404s)
STEP: Creating pod pod-subpath-test-dynamicpv-hzhm
STEP: Creating a pod to test subpath
Sep  2 13:38:14.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hzhm" in namespace "provisioning-6619" to be "Succeeded or Failed"
Sep  2 13:38:14.433: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 109.304887ms
Sep  2 13:38:16.545: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220525044s
Sep  2 13:38:18.657: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332795446s
Sep  2 13:38:20.773: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448584467s
Sep  2 13:38:22.883: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.558485094s
Sep  2 13:38:24.993: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668378177s
... skipping 2 lines ...
Sep  2 13:38:31.326: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 17.002351109s
Sep  2 13:38:33.436: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 19.112065293s
Sep  2 13:38:35.550: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 21.225657093s
Sep  2 13:38:37.662: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Pending", Reason="", readiness=false. Elapsed: 23.337656616s
Sep  2 13:38:39.775: INFO: Pod "pod-subpath-test-dynamicpv-hzhm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.450713777s
STEP: Saw pod success
Sep  2 13:38:39.775: INFO: Pod "pod-subpath-test-dynamicpv-hzhm" satisfied condition "Succeeded or Failed"
Sep  2 13:38:39.886: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-hzhm container test-container-subpath-dynamicpv-hzhm: <nil>
STEP: delete the pod
Sep  2 13:38:40.139: INFO: Waiting for pod pod-subpath-test-dynamicpv-hzhm to disappear
Sep  2 13:38:40.248: INFO: Pod pod-subpath-test-dynamicpv-hzhm no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-hzhm
Sep  2 13:38:40.248: INFO: Deleting pod "pod-subpath-test-dynamicpv-hzhm" in namespace "provisioning-6619"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":78,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 60 lines ...
Sep  2 13:37:28.194: INFO: PersistentVolumeClaim csi-hostpathfgqr9 found but phase is Pending instead of Bound.
Sep  2 13:37:30.324: INFO: PersistentVolumeClaim csi-hostpathfgqr9 found but phase is Pending instead of Bound.
Sep  2 13:37:32.441: INFO: PersistentVolumeClaim csi-hostpathfgqr9 found but phase is Pending instead of Bound.
Sep  2 13:37:34.551: INFO: PersistentVolumeClaim csi-hostpathfgqr9 found and phase=Bound (6.466655751s)
STEP: Creating pod pod-subpath-test-dynamicpv-5hnv
STEP: Creating a pod to test subpath
Sep  2 13:37:34.898: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5hnv" in namespace "provisioning-8104" to be "Succeeded or Failed"
Sep  2 13:37:35.014: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 115.467588ms
Sep  2 13:37:37.124: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225449465s
Sep  2 13:37:39.234: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335604419s
Sep  2 13:37:41.343: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445333087s
Sep  2 13:37:43.453: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55483459s
Sep  2 13:37:45.564: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665676042s
Sep  2 13:37:47.674: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.775573949s
Sep  2 13:37:49.784: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.885597366s
Sep  2 13:37:51.894: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.995758243s
Sep  2 13:37:54.008: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 19.109848559s
Sep  2 13:37:56.118: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.220009679s
STEP: Saw pod success
Sep  2 13:37:56.118: INFO: Pod "pod-subpath-test-dynamicpv-5hnv" satisfied condition "Succeeded or Failed"
Sep  2 13:37:56.228: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-5hnv container test-container-subpath-dynamicpv-5hnv: <nil>
STEP: delete the pod
Sep  2 13:37:56.452: INFO: Waiting for pod pod-subpath-test-dynamicpv-5hnv to disappear
Sep  2 13:37:56.561: INFO: Pod pod-subpath-test-dynamicpv-5hnv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5hnv
Sep  2 13:37:56.561: INFO: Deleting pod "pod-subpath-test-dynamicpv-5hnv" in namespace "provisioning-8104"
STEP: Creating pod pod-subpath-test-dynamicpv-5hnv
STEP: Creating a pod to test subpath
Sep  2 13:37:56.792: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-5hnv" in namespace "provisioning-8104" to be "Succeeded or Failed"
Sep  2 13:37:56.908: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 115.275591ms
Sep  2 13:37:59.018: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225043638s
Sep  2 13:38:01.128: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335235275s
Sep  2 13:38:03.240: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447318878s
Sep  2 13:38:05.349: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.556976754s
Sep  2 13:38:07.521: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.729023852s
... skipping 3 lines ...
Sep  2 13:38:15.967: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 19.174138009s
Sep  2 13:38:18.078: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 21.285934365s
Sep  2 13:38:20.188: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 23.395851878s
Sep  2 13:38:22.299: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Pending", Reason="", readiness=false. Elapsed: 25.506545873s
Sep  2 13:38:24.418: INFO: Pod "pod-subpath-test-dynamicpv-5hnv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.6254017s
STEP: Saw pod success
Sep  2 13:38:24.418: INFO: Pod "pod-subpath-test-dynamicpv-5hnv" satisfied condition "Succeeded or Failed"
Sep  2 13:38:24.533: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-5hnv container test-container-subpath-dynamicpv-5hnv: <nil>
STEP: delete the pod
Sep  2 13:38:24.782: INFO: Waiting for pod pod-subpath-test-dynamicpv-5hnv to disappear
Sep  2 13:38:24.892: INFO: Pod pod-subpath-test-dynamicpv-5hnv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-5hnv
Sep  2 13:38:24.892: INFO: Deleting pod "pod-subpath-test-dynamicpv-5hnv" in namespace "provisioning-8104"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:04.570: INFO: Only supported for providers [openstack] (not aws)
... skipping 89 lines ...
• [SLOW TEST:5.072 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":14,"skipped":175,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
• [SLOW TEST:5.686 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should list and delete a collection of ReplicaSets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":20,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":13,"skipped":114,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:06.401: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":14,"skipped":132,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-772x
STEP: Creating a pod to test atomic-volume-subpath
Sep  2 13:38:36.461: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-772x" in namespace "subpath-3509" to be "Succeeded or Failed"
Sep  2 13:38:36.569: INFO: Pod "pod-subpath-test-projected-772x": Phase="Pending", Reason="", readiness=false. Elapsed: 107.968164ms
Sep  2 13:38:38.679: INFO: Pod "pod-subpath-test-projected-772x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218266367s
Sep  2 13:38:40.787: INFO: Pod "pod-subpath-test-projected-772x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326388058s
Sep  2 13:38:42.898: INFO: Pod "pod-subpath-test-projected-772x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437076632s
Sep  2 13:38:45.007: INFO: Pod "pod-subpath-test-projected-772x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546190626s
Sep  2 13:38:47.116: INFO: Pod "pod-subpath-test-projected-772x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655457394s
... skipping 4 lines ...
Sep  2 13:38:57.664: INFO: Pod "pod-subpath-test-projected-772x": Phase="Running", Reason="", readiness=true. Elapsed: 21.203508462s
Sep  2 13:38:59.773: INFO: Pod "pod-subpath-test-projected-772x": Phase="Running", Reason="", readiness=true. Elapsed: 23.312279294s
Sep  2 13:39:01.882: INFO: Pod "pod-subpath-test-projected-772x": Phase="Running", Reason="", readiness=true. Elapsed: 25.421350617s
Sep  2 13:39:03.992: INFO: Pod "pod-subpath-test-projected-772x": Phase="Running", Reason="", readiness=true. Elapsed: 27.531525804s
Sep  2 13:39:06.110: INFO: Pod "pod-subpath-test-projected-772x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.649786559s
STEP: Saw pod success
Sep  2 13:39:06.111: INFO: Pod "pod-subpath-test-projected-772x" satisfied condition "Succeeded or Failed"
Sep  2 13:39:06.222: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-projected-772x container test-container-subpath-projected-772x: <nil>
STEP: delete the pod
Sep  2 13:39:06.445: INFO: Waiting for pod pod-subpath-test-projected-772x to disappear
Sep  2 13:39:06.553: INFO: Pod pod-subpath-test-projected-772x no longer exists
STEP: Deleting pod pod-subpath-test-projected-772x
Sep  2 13:39:06.553: INFO: Deleting pod "pod-subpath-test-projected-772x" in namespace "subpath-3509"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":103,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:06.897: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 38 lines ...
• [SLOW TEST:7.347 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":14,"skipped":127,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:13.798: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
Sep  2 13:39:14.609: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.771 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 19 lines ...
• [SLOW TEST:9.848 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":15,"skipped":176,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:14.848: INFO: Only supported for providers [gce gke] (not aws)
... skipping 229 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":35,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 282 lines ...
• [SLOW TEST:12.807 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should release NodePorts on delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1582
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":9,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:39:06.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-e1def2b9-8862-4e1c-9a5e-e491a9829290
STEP: Creating a pod to test consume configMaps
Sep  2 13:39:07.609: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a" in namespace "projected-7194" to be "Succeeded or Failed"
Sep  2 13:39:07.720: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a": Phase="Pending", Reason="", readiness=false. Elapsed: 110.352325ms
Sep  2 13:39:09.833: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223937539s
Sep  2 13:39:11.944: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334610005s
Sep  2 13:39:14.054: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444264433s
Sep  2 13:39:16.171: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561624903s
Sep  2 13:39:18.283: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.674194034s
Sep  2 13:39:20.393: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.78382984s
Sep  2 13:39:22.503: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.894133086s
STEP: Saw pod success
Sep  2 13:39:22.503: INFO: Pod "pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a" satisfied condition "Succeeded or Failed"
Sep  2 13:39:22.614: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a container agnhost-container: <nil>
STEP: delete the pod
Sep  2 13:39:22.841: INFO: Waiting for pod pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a to disappear
Sep  2 13:39:22.952: INFO: Pod pod-projected-configmaps-329c23d5-aedc-4e68-a3ca-318e0afbe31a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.348 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":15,"skipped":137,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:23.206: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-v8fx
STEP: Creating a pod to test atomic-volume-subpath
Sep  2 13:38:51.718: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-v8fx" in namespace "subpath-8182" to be "Succeeded or Failed"
Sep  2 13:38:51.827: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Pending", Reason="", readiness=false. Elapsed: 109.503317ms
Sep  2 13:38:53.937: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218967482s
Sep  2 13:38:56.046: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32843289s
Sep  2 13:38:58.156: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438245124s
Sep  2 13:39:00.266: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Running", Reason="", readiness=true. Elapsed: 8.548322277s
Sep  2 13:39:02.375: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Running", Reason="", readiness=true. Elapsed: 10.657528767s
... skipping 5 lines ...
Sep  2 13:39:15.039: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Running", Reason="", readiness=true. Elapsed: 23.32079517s
Sep  2 13:39:17.148: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Running", Reason="", readiness=true. Elapsed: 25.430291022s
Sep  2 13:39:19.258: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Running", Reason="", readiness=true. Elapsed: 27.540169667s
Sep  2 13:39:21.368: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Running", Reason="", readiness=true. Elapsed: 29.650300805s
Sep  2 13:39:23.478: INFO: Pod "pod-subpath-test-downwardapi-v8fx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.759768336s
STEP: Saw pod success
Sep  2 13:39:23.478: INFO: Pod "pod-subpath-test-downwardapi-v8fx" satisfied condition "Succeeded or Failed"
Sep  2 13:39:23.586: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-downwardapi-v8fx container test-container-subpath-downwardapi-v8fx: <nil>
STEP: delete the pod
Sep  2 13:39:23.809: INFO: Waiting for pod pod-subpath-test-downwardapi-v8fx to disappear
Sep  2 13:39:23.917: INFO: Pod pod-subpath-test-downwardapi-v8fx no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-v8fx
Sep  2 13:39:23.917: INFO: Deleting pod "pod-subpath-test-downwardapi-v8fx" in namespace "subpath-8182"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":71,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 127 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":6,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 154 lines ...
Sep  2 13:37:51.818: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-8824
Sep  2 13:37:51.928: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8824
Sep  2 13:37:52.041: INFO: creating *v1.StatefulSet: csi-mock-volumes-8824-5358/csi-mockplugin
Sep  2 13:37:52.153: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-8824
Sep  2 13:37:52.264: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-8824"
Sep  2 13:37:52.374: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-8824 to register on node ip-172-20-45-138.eu-central-1.compute.internal
I0902 13:38:03.166037    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0902 13:38:03.279049    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8824","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0902 13:38:03.393415    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I0902 13:38:03.502551    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0902 13:38:03.731037    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-8824","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0902 13:38:04.395646    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-8824","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
Sep  2 13:38:09.403: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0902 13:38:09.699424    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0902 13:38:12.469362    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I0902 13:38:14.410044    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0902 13:38:14.519277    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep  2 13:38:14.627: INFO: >>> kubeConfig: /root/.kube/config
I0902 13:38:15.356164    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4","storage.kubernetes.io/csiProvisionerIdentity":"1630589883553-8081-csi-mock-csi-mock-volumes-8824"}},"Response":{},"Error":"","FullError":null}
I0902 13:38:15.661740    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0902 13:38:15.772212    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Sep  2 13:38:15.880: INFO: >>> kubeConfig: /root/.kube/config
Sep  2 13:38:16.599: INFO: >>> kubeConfig: /root/.kube/config
Sep  2 13:38:17.319: INFO: >>> kubeConfig: /root/.kube/config
I0902 13:38:18.080038    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4/globalmount","target_path":"/var/lib/kubelet/pods/0ad842a5-e95c-4ed6-b576-329d5e4a8a45/volumes/kubernetes.io~csi/pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4","storage.kubernetes.io/csiProvisionerIdentity":"1630589883553-8081-csi-mock-csi-mock-volumes-8824"}},"Response":{},"Error":"","FullError":null}
Sep  2 13:38:19.895: INFO: Deleting pod "pvc-volume-tester-hkxgq" in namespace "csi-mock-volumes-8824"
Sep  2 13:38:20.005: INFO: Wait up to 5m0s for pod "pvc-volume-tester-hkxgq" to be fully deleted
Sep  2 13:38:21.789: INFO: >>> kubeConfig: /root/.kube/config
I0902 13:38:22.556362    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/0ad842a5-e95c-4ed6-b576-329d5e4a8a45/volumes/kubernetes.io~csi/pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4/mount"},"Response":{},"Error":"","FullError":null}
I0902 13:38:22.691728    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0902 13:38:22.801908    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4/globalmount"},"Response":{},"Error":"","FullError":null}
I0902 13:38:24.358902    4813 csi.go:432] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Sep  2 13:38:25.341: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k85qv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8824", SelfLink:"", UID:"4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4", ResourceVersion:"12309", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186689, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002da0e40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002da0e58), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00310a4a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00310a4b0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep  2 13:38:25.342: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k85qv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8824", SelfLink:"", UID:"4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4", ResourceVersion:"12315", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186689, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-45-138.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00339ea08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00339ea20), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00339ea38), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00339ea50), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002e775f0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002e77600), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep  2 13:38:25.342: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k85qv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8824", SelfLink:"", UID:"4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4", ResourceVersion:"12316", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186689, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8824", "volume.kubernetes.io/selected-node":"ip-172-20-45-138.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e12060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e12078), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e12090), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e120a8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e120c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e120d8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002fae060), VolumeMode:(*v1.PersistentVolumeMode)(0xc002fae070), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep  2 13:38:25.342: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k85qv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8824", SelfLink:"", UID:"4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4", ResourceVersion:"12327", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186689, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8824"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e120f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e12108), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e12120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e12138), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e12150), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e12168), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002fae0d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002fae0e0), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Sep  2 13:38:25.342: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-k85qv", GenerateName:"pvc-", Namespace:"csi-mock-volumes-8824", SelfLink:"", UID:"4ada7bb6-cd5d-4b01-880e-4fe02c93ecc4", ResourceVersion:"12430", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766186689, loc:(*time.Location)(0xa5cc7a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-8824", "volume.kubernetes.io/selected-node":"ip-172-20-45-138.eu-central-1.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e12198), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e121b0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e121c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e121e0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e121f8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e12210), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002fae110), VolumeMode:(*v1.PersistentVolumeMode)(0xc002fae120), DataSource:(*v1.TypedLocalObjectReference)(nil), DataSourceRef:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1022
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1080
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":7,"skipped":104,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:27.572: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 149 lines ...
• [SLOW TEST:23.474 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":13,"skipped":109,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:30.410: INFO: Only supported for providers [azure] (not aws)
... skipping 48 lines ...
Sep  2 13:39:24.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
Sep  2 13:39:24.877: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  2 13:39:25.100: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7833" in namespace "provisioning-7833" to be "Succeeded or Failed"
Sep  2 13:39:25.210: INFO: Pod "hostpath-symlink-prep-provisioning-7833": Phase="Pending", Reason="", readiness=false. Elapsed: 109.542117ms
Sep  2 13:39:27.328: INFO: Pod "hostpath-symlink-prep-provisioning-7833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.228382456s
STEP: Saw pod success
Sep  2 13:39:27.329: INFO: Pod "hostpath-symlink-prep-provisioning-7833" satisfied condition "Succeeded or Failed"
Sep  2 13:39:27.329: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7833" in namespace "provisioning-7833"
Sep  2 13:39:27.451: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7833" to be fully deleted
Sep  2 13:39:27.589: INFO: Creating resource for inline volume
Sep  2 13:39:27.589: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Sep  2 13:39:27.590: INFO: Deleting pod "pod-subpath-test-inlinevolume-78rp" in namespace "provisioning-7833"
Sep  2 13:39:27.809: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7833" in namespace "provisioning-7833" to be "Succeeded or Failed"
Sep  2 13:39:27.923: INFO: Pod "hostpath-symlink-prep-provisioning-7833": Phase="Pending", Reason="", readiness=false. Elapsed: 114.466436ms
Sep  2 13:39:30.035: INFO: Pod "hostpath-symlink-prep-provisioning-7833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.226388144s
STEP: Saw pod success
Sep  2 13:39:30.035: INFO: Pod "hostpath-symlink-prep-provisioning-7833" satisfied condition "Succeeded or Failed"
Sep  2 13:39:30.035: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7833" in namespace "provisioning-7833"
Sep  2 13:39:30.150: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7833" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:39:30.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7833" for this suite.
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:39:19.756: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d" in namespace "downward-api-9939" to be "Succeeded or Failed"
Sep  2 13:39:19.869: INFO: Pod "downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d": Phase="Pending", Reason="", readiness=false. Elapsed: 112.186477ms
Sep  2 13:39:21.979: INFO: Pod "downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22239426s
Sep  2 13:39:24.089: INFO: Pod "downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332319485s
Sep  2 13:39:26.199: INFO: Pod "downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442374237s
Sep  2 13:39:28.310: INFO: Pod "downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553412533s
Sep  2 13:39:30.420: INFO: Pod "downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.66370573s
STEP: Saw pod success
Sep  2 13:39:30.420: INFO: Pod "downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d" satisfied condition "Succeeded or Failed"
Sep  2 13:39:30.529: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d container client-container: <nil>
STEP: delete the pod
Sep  2 13:39:30.760: INFO: Waiting for pod downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d to disappear
Sep  2 13:39:30.870: INFO: Pod downwardapi-volume-5cf12669-8ded-47b0-98ab-cd909ef0eb7d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.008 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":104,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:31.120: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
Sep  2 13:38:51.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Sep  2 13:38:51.687: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep  2 13:38:51.909: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-7899" in namespace "volume-7899" to be "Succeeded or Failed"
Sep  2 13:38:52.024: INFO: Pod "hostpath-symlink-prep-volume-7899": Phase="Pending", Reason="", readiness=false. Elapsed: 115.346527ms
Sep  2 13:38:54.134: INFO: Pod "hostpath-symlink-prep-volume-7899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.225386928s
STEP: Saw pod success
Sep  2 13:38:54.134: INFO: Pod "hostpath-symlink-prep-volume-7899" satisfied condition "Succeeded or Failed"
Sep  2 13:38:54.134: INFO: Deleting pod "hostpath-symlink-prep-volume-7899" in namespace "volume-7899"
Sep  2 13:38:54.248: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-7899" to be fully deleted
Sep  2 13:38:54.357: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Sep  2 13:38:56.686: INFO: Running '/tmp/kubectl2675029652/kubectl --server=https://api.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-7899 exec hostpathsymlink-injector --namespace=volume-7899 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-7899' > /opt/0/index.html'
... skipping 36 lines ...
Sep  2 13:39:20.133: INFO: Pod hostpathsymlink-client still exists
Sep  2 13:39:22.023: INFO: Waiting for pod hostpathsymlink-client to disappear
Sep  2 13:39:22.133: INFO: Pod hostpathsymlink-client still exists
Sep  2 13:39:24.023: INFO: Waiting for pod hostpathsymlink-client to disappear
Sep  2 13:39:24.132: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Sep  2 13:39:24.250: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-7899" in namespace "volume-7899" to be "Succeeded or Failed"
Sep  2 13:39:24.361: INFO: Pod "hostpath-symlink-prep-volume-7899": Phase="Pending", Reason="", readiness=false. Elapsed: 111.253984ms
Sep  2 13:39:26.472: INFO: Pod "hostpath-symlink-prep-volume-7899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221612982s
Sep  2 13:39:28.582: INFO: Pod "hostpath-symlink-prep-volume-7899": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331678456s
Sep  2 13:39:30.691: INFO: Pod "hostpath-symlink-prep-volume-7899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.441190572s
STEP: Saw pod success
Sep  2 13:39:30.691: INFO: Pod "hostpath-symlink-prep-volume-7899" satisfied condition "Succeeded or Failed"
Sep  2 13:39:30.691: INFO: Deleting pod "hostpath-symlink-prep-volume-7899" in namespace "volume-7899"
Sep  2 13:39:30.806: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-7899" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:39:30.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-7899" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:31.152: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 114 lines ...
      Driver "aws" does not support cloning - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:241
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File","total":-1,"completed":13,"skipped":51,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:39:24.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:7.791 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":14,"skipped":51,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:32.167: INFO: Only supported for providers [gce gke] (not aws)
... skipping 49 lines ...
Sep  2 13:39:17.168: INFO: PersistentVolumeClaim pvc-d6lmq found but phase is Pending instead of Bound.
Sep  2 13:39:19.278: INFO: PersistentVolumeClaim pvc-d6lmq found and phase=Bound (14.907717113s)
Sep  2 13:39:19.278: INFO: Waiting up to 3m0s for PersistentVolume local-7l5cb to have phase Bound
Sep  2 13:39:19.388: INFO: PersistentVolume local-7l5cb found and phase=Bound (109.827918ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-2245
STEP: Creating a pod to test exec-volume-test
Sep  2 13:39:19.723: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-2245" in namespace "volume-9800" to be "Succeeded or Failed"
Sep  2 13:39:19.836: INFO: Pod "exec-volume-test-preprovisionedpv-2245": Phase="Pending", Reason="", readiness=false. Elapsed: 113.516153ms
Sep  2 13:39:21.948: INFO: Pod "exec-volume-test-preprovisionedpv-2245": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224899824s
Sep  2 13:39:24.067: INFO: Pod "exec-volume-test-preprovisionedpv-2245": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344004073s
Sep  2 13:39:26.178: INFO: Pod "exec-volume-test-preprovisionedpv-2245": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455066098s
Sep  2 13:39:28.292: INFO: Pod "exec-volume-test-preprovisionedpv-2245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.569008265s
STEP: Saw pod success
Sep  2 13:39:28.292: INFO: Pod "exec-volume-test-preprovisionedpv-2245" satisfied condition "Succeeded or Failed"
Sep  2 13:39:28.403: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-2245 container exec-container-preprovisionedpv-2245: <nil>
STEP: delete the pod
Sep  2 13:39:28.633: INFO: Waiting for pod exec-volume-test-preprovisionedpv-2245 to disappear
Sep  2 13:39:28.751: INFO: Pod exec-volume-test-preprovisionedpv-2245 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-2245
Sep  2 13:39:28.751: INFO: Deleting pod "exec-volume-test-preprovisionedpv-2245" in namespace "volume-9800"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":76,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:32.549: INFO: Only supported for providers [vsphere] (not aws)
... skipping 144 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:34.376: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 132 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-49dbf8c5-e001-4dba-9d89-c8f69a942087
STEP: Creating a pod to test consume configMaps
Sep  2 13:39:28.464: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb" in namespace "configmap-1379" to be "Succeeded or Failed"
Sep  2 13:39:28.574: INFO: Pod "pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb": Phase="Pending", Reason="", readiness=false. Elapsed: 109.853633ms
Sep  2 13:39:30.685: INFO: Pod "pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220367132s
Sep  2 13:39:32.797: INFO: Pod "pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332969971s
Sep  2 13:39:34.908: INFO: Pod "pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.443899581s
STEP: Saw pod success
Sep  2 13:39:34.908: INFO: Pod "pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb" satisfied condition "Succeeded or Failed"
Sep  2 13:39:35.018: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb container agnhost-container: <nil>
STEP: delete the pod
Sep  2 13:39:35.256: INFO: Waiting for pod pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb to disappear
Sep  2 13:39:35.366: INFO: Pod pod-configmaps-a8af80fd-0ec2-47ed-a885-d61a4f3801cb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.924 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":125,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:35.597: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 172 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":4,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 96 lines ...
• [SLOW TEST:7.581 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":4,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:38.825: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 53 lines ...
• [SLOW TEST:7.225 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":88,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 10 lines ...
Sep  2 13:38:08.111: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-4529hhrw4      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4529    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4529hhrw4,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4529    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4529hhrw4,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-4529    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4529hhrw4,VolumeMode:nil,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-4529hhrw4    3c6780ce-f752-4447-a8a8-e2ff42fabbdd 12250 0 2021-09-02 13:38:08 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-09-02 13:38:08 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-cxfgj pvc- provisioning-4529  239997f1-4c1b-41bc-9925-1fe4e34dd598 12273 0 2021-09-02 13:38:08 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-09-02 13:38:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-4529hhrw4,VolumeMode:*Filesystem,DataSource:nil,DataSourceRef:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-7cdd90ee-30f6-4565-b476-16725d3b596d in namespace provisioning-4529
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Sep  2 13:38:33.336: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-xq8f8" in namespace "provisioning-4529" to be "Succeeded or Failed"
Sep  2 13:38:33.444: INFO: Pod "pvc-volume-tester-writer-xq8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 107.952978ms
Sep  2 13:38:35.552: INFO: Pod "pvc-volume-tester-writer-xq8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21643791s
Sep  2 13:38:37.663: INFO: Pod "pvc-volume-tester-writer-xq8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326999632s
Sep  2 13:38:39.774: INFO: Pod "pvc-volume-tester-writer-xq8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438664483s
Sep  2 13:38:41.884: INFO: Pod "pvc-volume-tester-writer-xq8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547962946s
Sep  2 13:38:43.992: INFO: Pod "pvc-volume-tester-writer-xq8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656470307s
Sep  2 13:38:46.102: INFO: Pod "pvc-volume-tester-writer-xq8f8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.765871134s
Sep  2 13:38:48.211: INFO: Pod "pvc-volume-tester-writer-xq8f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.87551529s
STEP: Saw pod success
Sep  2 13:38:48.211: INFO: Pod "pvc-volume-tester-writer-xq8f8" satisfied condition "Succeeded or Failed"
Sep  2 13:38:48.435: INFO: Pod pvc-volume-tester-writer-xq8f8 has the following logs: 
Sep  2 13:38:48.435: INFO: Deleting pod "pvc-volume-tester-writer-xq8f8" in namespace "provisioning-4529"
Sep  2 13:38:48.549: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-xq8f8" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-49-181.eu-central-1.compute.internal"
Sep  2 13:38:48.988: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-qnrxv" in namespace "provisioning-4529" to be "Succeeded or Failed"
Sep  2 13:38:49.096: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 108.149405ms
Sep  2 13:38:51.205: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217172852s
Sep  2 13:38:53.318: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329668841s
Sep  2 13:38:55.433: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444251965s
Sep  2 13:38:57.542: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553587701s
Sep  2 13:38:59.651: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662413345s
... skipping 4 lines ...
Sep  2 13:39:10.205: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 21.217207752s
Sep  2 13:39:12.315: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 23.326689549s
Sep  2 13:39:14.424: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 25.435750173s
Sep  2 13:39:16.534: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Pending", Reason="", readiness=false. Elapsed: 27.545384185s
Sep  2 13:39:18.643: INFO: Pod "pvc-volume-tester-reader-qnrxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.654487144s
STEP: Saw pod success
Sep  2 13:39:18.643: INFO: Pod "pvc-volume-tester-reader-qnrxv" satisfied condition "Succeeded or Failed"
Sep  2 13:39:18.863: INFO: Pod pvc-volume-tester-reader-qnrxv has the following logs: hello world

Sep  2 13:39:18.863: INFO: Deleting pod "pvc-volume-tester-reader-qnrxv" in namespace "provisioning-4529"
Sep  2 13:39:18.987: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-qnrxv" to be fully deleted
Sep  2 13:39:19.095: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cxfgj] to have phase Bound
Sep  2 13:39:19.204: INFO: PersistentVolumeClaim pvc-cxfgj found and phase=Bound (108.978716ms)
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":12,"skipped":69,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":10,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":15,"skipped":128,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:42.182: INFO: Only supported for providers [gce gke] (not aws)
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:39:31.791: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737" in namespace "projected-7606" to be "Succeeded or Failed"
Sep  2 13:39:31.907: INFO: Pod "downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737": Phase="Pending", Reason="", readiness=false. Elapsed: 115.719256ms
Sep  2 13:39:34.016: INFO: Pod "downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22527618s
Sep  2 13:39:36.131: INFO: Pod "downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340152971s
Sep  2 13:39:38.258: INFO: Pod "downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466698772s
Sep  2 13:39:40.367: INFO: Pod "downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737": Phase="Pending", Reason="", readiness=false. Elapsed: 8.576099056s
Sep  2 13:39:42.478: INFO: Pod "downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.686576516s
STEP: Saw pod success
Sep  2 13:39:42.478: INFO: Pod "downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737" satisfied condition "Succeeded or Failed"
Sep  2 13:39:42.587: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737 container client-container: <nil>
STEP: delete the pod
Sep  2 13:39:42.813: INFO: Waiting for pod downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737 to disappear
Sep  2 13:39:42.922: INFO: Pod downwardapi-volume-9c4c0233-0f34-40fb-abc4-5bcf06622737 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":108,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:43.163: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:43.164: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 94 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 91 lines ...
• [SLOW TEST:29.362 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":15,"skipped":138,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:44.005: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
Sep  2 13:38:33.388: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-4684dh9kk
STEP: creating a claim
Sep  2 13:38:33.498: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-h424
STEP: Creating a pod to test atomic-volume-subpath
Sep  2 13:38:33.840: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-h424" in namespace "provisioning-4684" to be "Succeeded or Failed"
Sep  2 13:38:33.952: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Pending", Reason="", readiness=false. Elapsed: 112.618623ms
Sep  2 13:38:36.063: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223462078s
Sep  2 13:38:38.174: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334273741s
Sep  2 13:38:40.291: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451548501s
Sep  2 13:38:42.412: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Pending", Reason="", readiness=false. Elapsed: 8.572386899s
Sep  2 13:38:44.522: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Pending", Reason="", readiness=false. Elapsed: 10.682320499s
... skipping 13 lines ...
Sep  2 13:39:14.076: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Running", Reason="", readiness=true. Elapsed: 40.236476243s
Sep  2 13:39:16.187: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Running", Reason="", readiness=true. Elapsed: 42.347544588s
Sep  2 13:39:18.299: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Running", Reason="", readiness=true. Elapsed: 44.459357802s
Sep  2 13:39:20.410: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Running", Reason="", readiness=true. Elapsed: 46.570278404s
Sep  2 13:39:22.521: INFO: Pod "pod-subpath-test-dynamicpv-h424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.681454835s
STEP: Saw pod success
Sep  2 13:39:22.521: INFO: Pod "pod-subpath-test-dynamicpv-h424" satisfied condition "Succeeded or Failed"
Sep  2 13:39:22.631: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-h424 container test-container-subpath-dynamicpv-h424: <nil>
STEP: delete the pod
Sep  2 13:39:22.856: INFO: Waiting for pod pod-subpath-test-dynamicpv-h424 to disappear
Sep  2 13:39:22.967: INFO: Pod pod-subpath-test-dynamicpv-h424 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-h424
Sep  2 13:39:22.967: INFO: Deleting pod "pod-subpath-test-dynamicpv-h424" in namespace "provisioning-4684"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:44.452: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:38:36.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 43 lines ...
• [SLOW TEST:69.244 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":5,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Sep  2 13:39:36.289: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1705" to be "Succeeded or Failed"
Sep  2 13:39:36.407: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 117.590369ms
Sep  2 13:39:38.517: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228267934s
Sep  2 13:39:40.629: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340120311s
Sep  2 13:39:42.748: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.458559327s
Sep  2 13:39:44.858: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.569232063s
STEP: Saw pod success
Sep  2 13:39:44.858: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep  2 13:39:44.969: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep  2 13:39:45.202: INFO: Waiting for pod pod-host-path-test to disappear
Sep  2 13:39:45.323: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":9,"skipped":129,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:45.619: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 143 lines ...
Sep  2 13:39:32.301: INFO: PersistentVolumeClaim pvc-rczf8 found but phase is Pending instead of Bound.
Sep  2 13:39:34.410: INFO: PersistentVolumeClaim pvc-rczf8 found and phase=Bound (8.553772013s)
Sep  2 13:39:34.410: INFO: Waiting up to 3m0s for PersistentVolume local-p5gms to have phase Bound
Sep  2 13:39:34.517: INFO: PersistentVolume local-p5gms found and phase=Bound (107.05243ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-stwz
STEP: Creating a pod to test subpath
Sep  2 13:39:34.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-stwz" in namespace "provisioning-1654" to be "Succeeded or Failed"
Sep  2 13:39:34.949: INFO: Pod "pod-subpath-test-preprovisionedpv-stwz": Phase="Pending", Reason="", readiness=false. Elapsed: 107.171578ms
Sep  2 13:39:37.058: INFO: Pod "pod-subpath-test-preprovisionedpv-stwz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215887518s
Sep  2 13:39:39.165: INFO: Pod "pod-subpath-test-preprovisionedpv-stwz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323806245s
Sep  2 13:39:41.274: INFO: Pod "pod-subpath-test-preprovisionedpv-stwz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432790797s
Sep  2 13:39:43.382: INFO: Pod "pod-subpath-test-preprovisionedpv-stwz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.540833663s
STEP: Saw pod success
Sep  2 13:39:43.383: INFO: Pod "pod-subpath-test-preprovisionedpv-stwz" satisfied condition "Succeeded or Failed"
Sep  2 13:39:43.490: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-stwz container test-container-volume-preprovisionedpv-stwz: <nil>
STEP: delete the pod
Sep  2 13:39:43.711: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-stwz to disappear
Sep  2 13:39:43.818: INFO: Pod pod-subpath-test-preprovisionedpv-stwz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-stwz
Sep  2 13:39:43.818: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-stwz" in namespace "provisioning-1654"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":16,"skipped":187,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:46.946: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 367 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":9,"skipped":92,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:49.116: INFO: Only supported for providers [azure] (not aws)
... skipping 117 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:39:39.503: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d" in namespace "downward-api-4293" to be "Succeeded or Failed"
Sep  2 13:39:39.612: INFO: Pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d": Phase="Pending", Reason="", readiness=false. Elapsed: 108.995571ms
Sep  2 13:39:41.722: INFO: Pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218888874s
Sep  2 13:39:43.836: INFO: Pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333386002s
Sep  2 13:39:45.946: INFO: Pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443729947s
Sep  2 13:39:48.055: INFO: Pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552754456s
Sep  2 13:39:50.165: INFO: Pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662379558s
Sep  2 13:39:52.274: INFO: Pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.771577485s
STEP: Saw pod success
Sep  2 13:39:52.274: INFO: Pod "downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d" satisfied condition "Succeeded or Failed"
Sep  2 13:39:52.384: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d container client-container: <nil>
STEP: delete the pod
Sep  2 13:39:52.607: INFO: Waiting for pod downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d to disappear
Sep  2 13:39:52.721: INFO: Pod downwardapi-volume-95b89e6b-5899-4a6b-8be0-6819207bd36d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.100 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:52.958: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 48 lines ...
• [SLOW TEST:10.777 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":11,"skipped":122,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
• [SLOW TEST:12.883 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":16,"skipped":143,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:55.177: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver emptydir doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":6,"skipped":56,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:39:48.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
• [SLOW TEST:8.689 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:95
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":7,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:57.365: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 113 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:505
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":10,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:57.900: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 50 lines ...
• [SLOW TEST:20.508 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep  2 13:39:47.073: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9" in namespace "projected-1244" to be "Succeeded or Failed"
Sep  2 13:39:47.183: INFO: Pod "downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.956173ms
Sep  2 13:39:49.294: INFO: Pod "downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220350221s
Sep  2 13:39:51.405: INFO: Pod "downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331731454s
Sep  2 13:39:53.516: INFO: Pod "downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9": Phase="Running", Reason="", readiness=true. Elapsed: 6.442501917s
Sep  2 13:39:55.626: INFO: Pod "downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9": Phase="Running", Reason="", readiness=true. Elapsed: 8.552548194s
Sep  2 13:39:57.737: INFO: Pod "downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.663828873s
STEP: Saw pod success
Sep  2 13:39:57.737: INFO: Pod "downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9" satisfied condition "Succeeded or Failed"
Sep  2 13:39:57.847: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9 container client-container: <nil>
STEP: delete the pod
Sep  2 13:39:58.084: INFO: Waiting for pod downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9 to disappear
Sep  2 13:39:58.195: INFO: Pod downwardapi-volume-fdc584a6-d438-46ba-b886-9a000a4aaee9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.027 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":145,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:58.452: INFO: Driver local doesn't support ext4 -- skipping
... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:39:59.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-134" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":11,"skipped":49,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:39:59.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6135" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":158,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:39:59.566: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Sep  2 13:39:44.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep  2 13:39:44.711: INFO: Waiting up to 5m0s for pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d" in namespace "emptydir-9381" to be "Succeeded or Failed"
Sep  2 13:39:44.821: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d": Phase="Pending", Reason="", readiness=false. Elapsed: 109.439699ms
Sep  2 13:39:46.932: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221381162s
Sep  2 13:39:49.043: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331941115s
Sep  2 13:39:51.153: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44222821s
Sep  2 13:39:53.264: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552794697s
Sep  2 13:39:55.374: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662717495s
Sep  2 13:39:57.484: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.772536187s
Sep  2 13:39:59.599: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.88754118s
STEP: Saw pod success
Sep  2 13:39:59.599: INFO: Pod "pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d" satisfied condition "Succeeded or Failed"
Sep  2 13:39:59.708: INFO: Trying to get logs from node ip-172-20-45-138.eu-central-1.compute.internal pod pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d container test-container: <nil>
STEP: delete the pod
Sep  2 13:39:59.934: INFO: Waiting for pod pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d to disappear
Sep  2 13:40:00.044: INFO: Pod pod-769e3f8e-cb8c-4318-b85d-d67938f6d59d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 31 lines ...
Sep  2 13:39:47.742: INFO: PersistentVolumeClaim pvc-bgck7 found but phase is Pending instead of Bound.
Sep  2 13:39:49.852: INFO: PersistentVolumeClaim pvc-bgck7 found and phase=Bound (8.547260743s)
Sep  2 13:39:49.852: INFO: Waiting up to 3m0s for PersistentVolume local-n2d66 to have phase Bound
Sep  2 13:39:49.961: INFO: PersistentVolume local-n2d66 found and phase=Bound (108.759252ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xzww
STEP: Creating a pod to test subpath
Sep  2 13:39:50.298: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xzww" in namespace "provisioning-7743" to be "Succeeded or Failed"
Sep  2 13:39:50.407: INFO: Pod "pod-subpath-test-preprovisionedpv-xzww": Phase="Pending", Reason="", readiness=false. Elapsed: 109.122198ms
Sep  2 13:39:52.518: INFO: Pod "pod-subpath-test-preprovisionedpv-xzww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219628985s
Sep  2 13:39:54.630: INFO: Pod "pod-subpath-test-preprovisionedpv-xzww": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331215003s
Sep  2 13:39:56.739: INFO: Pod "pod-subpath-test-preprovisionedpv-xzww": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44108619s
Sep  2 13:39:58.849: INFO: Pod "pod-subpath-test-preprovisionedpv-xzww": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550529806s
Sep  2 13:40:00.958: INFO: Pod "pod-subpath-test-preprovisionedpv-xzww": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.659897314s
STEP: Saw pod success
Sep  2 13:40:00.958: INFO: Pod "pod-subpath-test-preprovisionedpv-xzww" satisfied condition "Succeeded or Failed"
Sep  2 13:40:01.067: INFO: Trying to get logs from node ip-172-20-42-46.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-xzww container test-container-subpath-preprovisionedpv-xzww: <nil>
STEP: delete the pod
Sep  2 13:40:01.299: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xzww to disappear
Sep  2 13:40:01.408: INFO: Pod pod-subpath-test-preprovisionedpv-xzww no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xzww
Sep  2 13:40:01.408: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xzww" in namespace "provisioning-7743"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:40:02.950: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:30.886 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:40:03.093: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 23 lines ...
Sep  2 13:39:59.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep  2 13:40:00.108: INFO: Waiting up to 5m0s for pod "pod-d38fd417-4740-49e3-96bb-f4262d430d27" in namespace "emptydir-8331" to be "Succeeded or Failed"
Sep  2 13:40:00.222: INFO: Pod "pod-d38fd417-4740-49e3-96bb-f4262d430d27": Phase="Pending", Reason="", readiness=false. Elapsed: 114.422164ms
Sep  2 13:40:02.334: INFO: Pod "pod-d38fd417-4740-49e3-96bb-f4262d430d27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226290894s
Sep  2 13:40:04.449: INFO: Pod "pod-d38fd417-4740-49e3-96bb-f4262d430d27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341633558s
Sep  2 13:40:06.563: INFO: Pod "pod-d38fd417-4740-49e3-96bb-f4262d430d27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.454911457s
STEP: Saw pod success
Sep  2 13:40:06.563: INFO: Pod "pod-d38fd417-4740-49e3-96bb-f4262d430d27" satisfied condition "Succeeded or Failed"
Sep  2 13:40:06.673: INFO: Trying to get logs from node ip-172-20-49-181.eu-central-1.compute.internal pod pod-d38fd417-4740-49e3-96bb-f4262d430d27 container test-container: <nil>
STEP: delete the pod
Sep  2 13:40:06.907: INFO: Waiting for pod pod-d38fd417-4740-49e3-96bb-f4262d430d27 to disappear
Sep  2 13:40:07.016: INFO: Pod pod-d38fd417-4740-49e3-96bb-f4262d430d27 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":55,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:40:07.266: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:40:09.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-4222" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":10,"skipped":90,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep  2 13:40:09.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-7693" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":13,"skipped":65,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:40:10.091: INFO: Only supported for providers [vsphere] (not aws)
... skipping 151 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":8,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:40:10.133: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 62 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395

      Driver csi-hostpath doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":10,"skipped":100,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep  2 13:40:10.177: INFO: Only supported for providers [vsphere] (not aws)
... skipping 85 lines ...
  Only supported for providers [gce] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/metadata_concealment.go:35
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":144,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep  2 13:40:00.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32220 lines ...
= Could not attach volume \\\"vol-0c2928bbb513138aa\\\" to node \\\"i-047270f15cfa27d5a\\\": could not attach volume \\\"vol-0c2928bbb513138aa\\\" to node \\\"i-047270f15cfa27d5a\\\": IncorrectState: vol-0c2928bbb513138aa is not 'available'.\\n\\tstatus code: 400, request id: b4d6f00b-b6d9-4f18-a5a8-f5a03287a544\"\nE0902 13:42:36.700096       1 tokens_controller.go:262] error synchronizing serviceaccount job-9294/default: secrets \"default-token-pbm2w\" is forbidden: unable to create new content in namespace job-9294 because it is being terminated\nI0902 13:42:36.782999       1 garbagecollector.go:471] \"Processing object\" object=\"job-9294/exceed-active-deadline--1-lb6wm\" objectUID=8d4a8b88-533f-4d6f-8327-3da04d3670ef kind=\"Pod\" virtual=false\nI0902 13:42:36.783030       1 garbagecollector.go:471] \"Processing object\" object=\"job-9294/exceed-active-deadline--1-ltcnb\" objectUID=440418cd-3dd1-4a72-9115-98993d711d34 kind=\"Pod\" virtual=false\nI0902 13:42:36.783254       1 job_controller.go:406] enqueueing job job-9294/exceed-active-deadline\nI0902 13:42:36.857123       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-a64e73e7-232e-4983-94be-822230e2d76f\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0632c11b53cc8cb0b\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nE0902 13:42:36.935980       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0902 13:42:37.006086       1 disruption.go:534] Error syncing PodDisruptionBudget disruption-5612/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nE0902 13:42:37.112157       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch 
*v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:42:37.709800       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"ebs.csi.aws.com-vol-0c2928bbb513138aa\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c2928bbb513138aa\") from node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:42:37.875468       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"aws-whgq2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447\") from node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:42:37.875580       1 event.go:294] \"Event occurred\" object=\"volume-3640/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-whgq2\\\" \"\nI0902 13:42:38.258837       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"ebs.csi.aws.com-vol-0c2928bbb513138aa\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c2928bbb513138aa\") from node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:42:38.258997       1 actual_state_of_world.go:350] Volume \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c2928bbb513138aa\" is already added to attachedVolume list to node \"ip-172-20-61-191.eu-central-1.compute.internal\", update device path \"\"\nI0902 13:42:38.258947       1 event.go:294] \"Event occurred\" object=\"volume-4754/exec-volume-test-inlinevolume-vq99\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"ebs.csi.aws.com-vol-0c2928bbb513138aa\\\" \"\nE0902 13:42:38.355301       1 tokens_controller.go:262] error synchronizing serviceaccount dns-4674/default: secrets \"default-token-cgwpr\" is forbidden: unable to create new content in namespace dns-4674 because it is being terminated\nE0902 13:42:38.595389       1 
namespace_controller.go:162] deletion of namespace kubectl-1430 failed: unexpected items still remain in namespace: kubectl-1430 for gvr: /v1, Resource=pods\nI0902 13:42:38.639855       1 controller.go:400] Ensuring load balancer for service deployment-9199/test-rolling-update-with-lb\nI0902 13:42:38.639966       1 controller.go:901] Adding finalizer to service deployment-9199/test-rolling-update-with-lb\nI0902 13:42:38.640790       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Normal\" reason=\"EnsuringLoadBalancer\" message=\"Ensuring load balancer\"\nI0902 13:42:38.657268       1 aws.go:3907] EnsureLoadBalancer(e2e-3c2263334e-b172d.test-cncf-aws.k8s.io, deployment-9199, test-rolling-update-with-lb, eu-central-1, , [{ TCP <nil> 80 {0 80 } 32684}], map[])\nI0902 13:42:38.779535       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8353\nI0902 13:42:39.003886       1 namespace_controller.go:185] Namespace has been deleted resourcequota-5825\nE0902 13:42:39.045999       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-7543/default: secrets \"default-token-7t7n2\" is forbidden: unable to create new content in namespace resourcequota-7543 because it is being terminated\nE0902 13:42:39.143365       1 pv_controller.go:1451] error finding provisioning plugin for claim volumemode-9567/pvc-45v7w: storageclass.storage.k8s.io \"volumemode-9567\" not found\nI0902 13:42:39.143997       1 event.go:294] \"Event occurred\" object=\"volumemode-9567/pvc-45v7w\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-9567\\\" not found\"\nI0902 13:42:39.170726       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-7543/test-quota\nI0902 13:42:39.255363       1 pv_controller.go:879] volume \"local-mm5lj\" 
entered phase \"Available\"\nE0902 13:42:39.345985       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-5988/default: secrets \"default-token-frc9d\" is forbidden: unable to create new content in namespace volume-expand-5988 because it is being terminated\nI0902 13:42:39.506749       1 aws.go:3128] Existing security group ingress: sg-0e1069f261fd831ae []\nI0902 13:42:39.506794       1 aws.go:3159] Adding security group ingress: sg-0e1069f261fd831ae [{\n  FromPort: 80,\n  IpProtocol: \"tcp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 80\n} {\n  FromPort: 3,\n  IpProtocol: \"icmp\",\n  IpRanges: [{\n      CidrIp: \"0.0.0.0/0\"\n    }],\n  ToPort: 4\n}]\nI0902 13:42:39.578960       1 namespace_controller.go:185] Namespace has been deleted sctp-1309\nI0902 13:42:39.655910       1 aws_loadbalancer.go:1013] Creating load balancer for deployment-9199/test-rolling-update-with-lb with name: a875bc7be8b554a8e807d83bddb80150\nI0902 13:42:40.157544       1 aws_loadbalancer.go:1216] Updating load-balancer attributes for \"a875bc7be8b554a8e807d83bddb80150\"\nE0902 13:42:40.163140       1 controller.go:307] error processing service deployment-9199/test-rolling-update-with-lb (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io/i-0f2ede71dd01a1095 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:768319786644:loadbalancer/a875bc7be8b554a8e807d83bddb80150\\n\\tstatus code: 403, request id: a34e47d7-4b0a-4865-b261-771e657b5a28\"\nI0902 13:42:40.163550       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"SyncLoadBalancerFailed\" message=\"Error syncing load balancer: failed to 
ensure load balancer: Unable to update load balancer attributes during attribute sync: \\\"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io/i-0f2ede71dd01a1095 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:768319786644:loadbalancer/a875bc7be8b554a8e807d83bddb80150\\\\n\\\\tstatus code: 403, request id: a34e47d7-4b0a-4865-b261-771e657b5a28\\\"\"\nI0902 13:42:40.278132       1 namespace_controller.go:185] Namespace has been deleted replicaset-957\nE0902 13:42:40.285879       1 tokens_controller.go:262] error synchronizing serviceaccount projected-7301/default: secrets \"default-token-tks4z\" is forbidden: unable to create new content in namespace projected-7301 because it is being terminated\nI0902 13:42:40.392435       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"ebs.csi.aws.com-vol-0d2a0d4e4f14345ac\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0d2a0d4e4f14345ac\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:42:40.397786       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"ebs.csi.aws.com-vol-0d2a0d4e4f14345ac\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0d2a0d4e4f14345ac\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:42:40.479373       1 graph_builder.go:587] add [v1/Pod, namespace: gc-5050, name: pod1, uid: ed405c15-5167-4698-b334-cf8eaaf4da6a] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:42:40.479945       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5050/pod2\" objectUID=7cc9601c-4448-4048-8dc6-ed8a80d2f641 kind=\"Pod\" virtual=false\nI0902 13:42:40.480077       1 garbagecollector.go:471] \"Processing object\" object=\"gc-5050/pod1\" objectUID=ed405c15-5167-4698-b334-cf8eaaf4da6a kind=\"Pod\" virtual=false\nI0902 13:42:40.484072   
    1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5050, name: pod2, uid: 7cc9601c-4448-4048-8dc6-ed8a80d2f641] to attemptToDelete, because its owner [v1/Pod, namespace: gc-5050, name: pod1, uid: ed405c15-5167-4698-b334-cf8eaaf4da6a] is deletingDependents
I0902 13:42:40.489955       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5050, name: pod2, uid: 7cc9601c-4448-4048-8dc6-ed8a80d2f641] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground
I0902 13:42:40.496368       1 graph_builder.go:587] add [v1/Pod, namespace: gc-5050, name: pod2, uid: 7cc9601c-4448-4048-8dc6-ed8a80d2f641] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0902 13:42:40.496443       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod3" objectUID=6db2b81d-c1d5-420e-a06d-6c109ae0bb85 kind="Pod" virtual=false
I0902 13:42:40.496801       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod2" objectUID=7cc9601c-4448-4048-8dc6-ed8a80d2f641 kind="Pod" virtual=false
I0902 13:42:40.505993       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5050, name: pod3, uid: 6db2b81d-c1d5-420e-a06d-6c109ae0bb85] to attemptToDelete, because its owner [v1/Pod, namespace: gc-5050, name: pod2, uid: 7cc9601c-4448-4048-8dc6-ed8a80d2f641] is deletingDependents
I0902 13:42:40.507926       1 garbagecollector.go:545] processing object [v1/Pod, namespace: gc-5050, name: pod3, uid: 6db2b81d-c1d5-420e-a06d-6c109ae0bb85], some of its owners and its dependent [[v1/Pod, namespace: gc-5050, name: pod1, uid: ed405c15-5167-4698-b334-cf8eaaf4da6a]] have FinalizerDeletingDependents, to prevent potential cycle, its ownerReferences are going to be modified to be non-blocking, then the object is going to be deleted with Foreground
I0902 13:42:40.511919       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5050, name: pod3, uid: 6db2b81d-c1d5-420e-a06d-6c109ae0bb85] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground
I0902 13:42:40.513905       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod2" objectUID=7cc9601c-4448-4048-8dc6-ed8a80d2f641 kind="Pod" virtual=false
I0902 13:42:40.524890       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-5050, name: pod2, uid: 7cc9601c-4448-4048-8dc6-ed8a80d2f641]
I0902 13:42:40.527053       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod3" objectUID=6db2b81d-c1d5-420e-a06d-6c109ae0bb85 kind="Pod" virtual=false
I0902 13:42:40.527424       1 graph_builder.go:587] add [v1/Pod, namespace: gc-5050, name: pod3, uid: 6db2b81d-c1d5-420e-a06d-6c109ae0bb85] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0902 13:42:40.527462       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod1" objectUID=ed405c15-5167-4698-b334-cf8eaaf4da6a kind="Pod" virtual=false
I0902 13:42:40.529100       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod3" objectUID=6db2b81d-c1d5-420e-a06d-6c109ae0bb85 kind="Pod" virtual=false
I0902 13:42:40.538172       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod3" objectUID=6db2b81d-c1d5-420e-a06d-6c109ae0bb85 kind="Pod" virtual=false
I0902 13:42:40.538568       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod1" objectUID=ed405c15-5167-4698-b334-cf8eaaf4da6a kind="Pod" virtual=false
I0902 13:42:40.546653       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-5050, name: pod1, uid: ed405c15-5167-4698-b334-cf8eaaf4da6a]
I0902 13:42:40.551546       1 garbagecollector.go:471] "Processing object" object="gc-5050/pod3" objectUID=6db2b81d-c1d5-420e-a06d-6c109ae0bb85 kind="Pod" virtual=false
I0902 13:42:40.553332       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-5050, name: pod3, uid: 6db2b81d-c1d5-420e-a06d-6c109ae0bb85]
I0902 13:42:40.963998       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-3184/csi-hostpathf4cqk"
I0902 13:42:40.970789       1 pv_controller.go:640] volume "pvc-f5e5e12a-a87b-4640-b1f3-b124442e002a" is released and reclaim policy "Delete" will be executed
I0902 13:42:40.974614       1 pv_controller.go:879] volume "pvc-f5e5e12a-a87b-4640-b1f3-b124442e002a" entered phase "Released"
I0902 13:42:40.976863       1 pv_controller.go:1340] isVolumeReleased[pvc-f5e5e12a-a87b-4640-b1f3-b124442e002a]: volume is released
I0902 13:42:40.988101       1 pv_controller_base.go:505] deletion of claim "provisioning-3184/csi-hostpathf4cqk" was already processed
I0902 13:42:41.347364       1 namespace_controller.go:185] Namespace has been deleted kubectl-5678
E0902 13:42:41.449456       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-752-3388/default: secrets "default-token-4rzbp" is forbidden: unable to create new content in namespace ephemeral-752-3388 because it is being terminated
E0902 13:42:42.957961       1 tokens_controller.go:262] error synchronizing serviceaccount projected-3921/default: secrets "default-token-76kvf" is forbidden: unable to create new content in namespace projected-3921 because it is being terminated
E0902 13:42:43.088064       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0902 13:42:43.181116       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-655/default: secrets "default-token-nqrc6" is forbidden: unable to create new content in namespace provisioning-655 because it is being terminated
E0902 13:42:43.193737       1 tokens_controller.go:262] error synchronizing serviceaccount volume-658/default: secrets "default-token-7kccg" is forbidden: unable to create new content in namespace volume-658 because it is being terminated
I0902 13:42:43.373889       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05d4c067cc3b9501f") on node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:42:43.423905       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05d4c067cc3b9501f") from node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:43.535715       1 namespace_controller.go:185] Namespace has been deleted dns-4674
E0902 13:42:43.588269       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0902 13:42:44.140291       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-3359/default: secrets "default-token-df97j" is forbidden: unable to create new content in namespace downward-api-3359 because it is being terminated
I0902 13:42:44.339051       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5521c77d-cbaa-4924-a5ce-1921cf8a75ba" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-5753^4") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:42:44.342614       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-5521c77d-cbaa-4924-a5ce-1921cf8a75ba" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-5753^4") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:42:44.373878       1 namespace_controller.go:185] Namespace has been deleted resourcequota-7543
I0902 13:42:44.509786       1 namespace_controller.go:185] Namespace has been deleted disruption-8982
I0902 13:42:44.591474       1 namespace_controller.go:185] Namespace has been deleted volume-expand-5988
I0902 13:42:44.801600       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-5753/pvc-58t4h"
I0902 13:42:44.809465       1 pv_controller.go:640] volume "pvc-5521c77d-cbaa-4924-a5ce-1921cf8a75ba" is released and reclaim policy "Delete" will be executed
I0902 13:42:44.812546       1 pv_controller.go:879] volume "pvc-5521c77d-cbaa-4924-a5ce-1921cf8a75ba" entered phase "Released"
I0902 13:42:44.815140       1 pv_controller.go:1340] isVolumeReleased[pvc-5521c77d-cbaa-4924-a5ce-1921cf8a75ba]: volume is released
E0902 13:42:44.827620       1 pv_protection_controller.go:118] PV pvc-5521c77d-cbaa-4924-a5ce-1921cf8a75ba failed with : Operation cannot be fulfilled on persistentvolumes "pvc-5521c77d-cbaa-4924-a5ce-1921cf8a75ba": the object has been modified; please apply your changes to the latest version and try again
I0902 13:42:44.831902       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-5753/pvc-58t4h" was already processed
I0902 13:42:44.904912       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-5521c77d-cbaa-4924-a5ce-1921cf8a75ba" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-5753^4") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:42:44.942717       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-9969/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0902 13:42:44.943161       1 event.go:294] "Event occurred" object="webhook-9969/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0902 13:42:44.956400       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-9969/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:42:44.958221       1 event.go:294] "Event occurred" object="webhook-9969/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-px48d"
I0902 13:42:45.163929       1 controller.go:400] Ensuring load balancer for service deployment-9199/test-rolling-update-with-lb
I0902 13:42:45.163985       1 aws.go:3907] EnsureLoadBalancer(e2e-3c2263334e-b172d.test-cncf-aws.k8s.io, deployment-9199, test-rolling-update-with-lb, eu-central-1, , [{ TCP <nil> 80 {0 80 } 32684}], map[])
I0902 13:42:45.164349       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0902 13:42:45.405999       1 aws.go:3128] Existing security group ingress: sg-0e1069f261fd831ae [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I0902 13:42:45.421967       1 aws_loadbalancer.go:1189] Creating additional load balancer tags for a875bc7be8b554a8e807d83bddb80150
I0902 13:42:45.431419       1 aws_loadbalancer.go:1216] Updating load-balancer attributes for "a875bc7be8b554a8e807d83bddb80150"
E0902 13:42:45.436516       1 controller.go:307] error processing service deployment-9199/test-rolling-update-with-lb (will retry): failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: "AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io/i-0f2ede71dd01a1095 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:768319786644:loadbalancer/a875bc7be8b554a8e807d83bddb80150\n\tstatus code: 403, request id: 459aa4c4-3ac4-4fe2-bb61-6865b3632194"
I0902 13:42:45.436650       1 namespace_controller.go:185] Namespace has been deleted projected-7301
I0902 13:42:45.436746       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Warning" reason="SyncLoadBalancerFailed" message="Error syncing load balancer: failed to ensure load balancer: Unable to update load balancer attributes during attribute sync: \"AccessDenied: User: arn:aws:sts::768319786644:assumed-role/masters.e2e-3c2263334e-b172d.test-cncf-aws.k8s.io/i-0f2ede71dd01a1095 is not authorized to perform: elasticloadbalancing:ModifyLoadBalancerAttributes on resource: arn:aws:elasticloadbalancing:eu-central-1:768319786644:loadbalancer/a875bc7be8b554a8e807d83bddb80150\\n\\tstatus code: 403, request id: 459aa4c4-3ac4-4fe2-bb61-6865b3632194\""
I0902 13:42:45.686356       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05d4c067cc3b9501f") from node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:45.686513       1 event.go:294] "Event occurred" object="volume-expand-527/pod-58e20284-7790-4e67-8d75-c3c70150a51c" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c\" "
E0902 13:42:45.934336       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-6244/default: secrets "default-token-d6ms2" is forbidden: unable to create new content in namespace ephemeral-6244 because it is being terminated
I0902 13:42:45.978712       1 event.go:294] "Event occurred" object="volume-expand-4808/awslpqx8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0902 13:42:46.239239       1 namespace_controller.go:185] Namespace has been deleted volume-5746
I0902 13:42:46.603739       1 namespace_controller.go:185] Namespace has been deleted ephemeral-752-3388
I0902 13:42:47.554802       1 event.go:294] "Event occurred" object="csi-mock-volumes-7896-96/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0902 13:42:47.779603       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "ebs.csi.aws.com-vol-0d2a0d4e4f14345ac" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d2a0d4e4f14345ac") on node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:42:47.785712       1 event.go:294] "Event occurred" object="csi-mock-volumes-7896-96/csi-mockplugin-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I0902 13:42:47.873247       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "ebs.csi.aws.com-vol-0d2a0d4e4f14345ac" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d2a0d4e4f14345ac") from node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:42:48.284796       1 namespace_controller.go:185] Namespace has been deleted volume-658
I0902 13:42:48.300032       1 namespace_controller.go:185] Namespace has been deleted provisioning-655
I0902 13:42:48.794255       1 pv_controller.go:930] claim "volumemode-9567/pvc-45v7w" bound to volume "local-mm5lj"
I0902 13:42:48.794560       1 event.go:294] "Event occurred" object="volume-expand-4808/awslpqx8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0902 13:42:48.835254       1 pv_controller.go:879] volume "local-mm5lj" entered phase "Bound"
I0902 13:42:48.835289       1 pv_controller.go:982] volume "local-mm5lj" bound to claim "volumemode-9567/pvc-45v7w"
I0902 13:42:48.859771       1 pv_controller.go:823] claim "volumemode-9567/pvc-45v7w" entered phase "Bound"
E0902 13:42:49.041941       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-3590/default: secrets "default-token-454rc" is forbidden: unable to create new content in namespace downward-api-3590 because it is being terminated
E0902 13:42:49.088317       1 namespace_controller.go:162] deletion of namespace kubectl-1430 failed: unexpected items still remain in namespace: kubectl-1430 for gvr: /v1, Resource=pods
I0902 13:42:49.186332       1 namespace_controller.go:185] Namespace has been deleted downward-api-3359
I0902 13:42:49.600416       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "ebs.csi.aws.com-vol-0c2928bbb513138aa" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c2928bbb513138aa") on node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:49.602767       1 operation_generator.go:1577] Verified volume is safe to detach for volume "ebs.csi.aws.com-vol-0c2928bbb513138aa" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c2928bbb513138aa") on node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:50.146886       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "ebs.csi.aws.com-vol-0d2a0d4e4f14345ac" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d2a0d4e4f14345ac") from node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:42:50.146983       1 event.go:294] "Event occurred" object="volume-126/aws-client" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"ebs.csi.aws.com-vol-0d2a0d4e4f14345ac\" "
E0902 13:42:50.182071       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-5753/default: secrets "default-token-tjqq8" is forbidden: unable to create new content in namespace csi-mock-volumes-5753 because it is being terminated
I0902 13:42:51.134796       1 namespace_controller.go:185] Namespace has been deleted cronjob-2714
I0902 13:42:51.157129       1 namespace_controller.go:185] Namespace has been deleted ephemeral-6244
E0902 13:42:51.171372       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2998/default: secrets "default-token-wpbrj" is forbidden: unable to create new content in namespace provisioning-2998 because it is being terminated
E0902 13:42:51.277889       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-5612/default: secrets "default-token-klxn8" is forbidden: unable to create new content in namespace disruption-5612 because it is being terminated
I0902 13:42:51.365546       1 garbagecollector.go:471] "Processing object" object="ephemeral-6244-9585/csi-hostpathplugin-586c8ff66d" objectUID=5f3e3d32-ab30-4670-a3ba-c33549503899 kind="ControllerRevision" virtual=false
I0902 13:42:51.365561       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-6244-9585/csi-hostpathplugin
I0902 13:42:51.365673       1 garbagecollector.go:471] "Processing object" object="ephemeral-6244-9585/csi-hostpathplugin-0" objectUID=3bf3ca8b-da76-4861-b69b-071b8ca87fe9 kind="Pod" virtual=false
I0902 13:42:51.387482       1 garbagecollector.go:580] "Deleting object" object="ephemeral-6244-9585/csi-hostpathplugin-0" objectUID=3bf3ca8b-da76-4861-b69b-071b8ca87fe9 kind="Pod" propagationPolicy=Background
I0902 13:42:51.387654       1 garbagecollector.go:580] "Deleting object" object="ephemeral-6244-9585/csi-hostpathplugin-586c8ff66d" objectUID=5f3e3d32-ab30-4670-a3ba-c33549503899 kind="ControllerRevision" propagationPolicy=Background
I0902 13:42:51.483462       1 namespace_controller.go:185] Namespace has been deleted provisioning-3184
E0902 13:42:51.659912       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:42:51.839268       1 garbagecollector.go:471] "Processing object" object="provisioning-3184-4941/csi-hostpathplugin-6fbfc4474" objectUID=c2d3c050-28f9-4711-ba12-66262a113589 kind="ControllerRevision" virtual=false
I0902 13:42:51.839696       1 stateful_set.go:440] StatefulSet has been deleted provisioning-3184-4941/csi-hostpathplugin
I0902 13:42:51.839816       1 garbagecollector.go:471] "Processing object" object="provisioning-3184-4941/csi-hostpathplugin-0" objectUID=c49d3849-c353-46f5-9871-19651ad6164a kind="Pod" virtual=false
I0902 13:42:51.841552       1 garbagecollector.go:580] "Deleting object" object="provisioning-3184-4941/csi-hostpathplugin-6fbfc4474" objectUID=c2d3c050-28f9-4711-ba12-66262a113589 kind="ControllerRevision" propagationPolicy=Background
I0902 13:42:51.842049       1 garbagecollector.go:580] "Deleting object" object="provisioning-3184-4941/csi-hostpathplugin-0" objectUID=c49d3849-c353-46f5-9871-19651ad6164a kind="Pod" propagationPolicy=Background
E0902 13:42:52.145233       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:42:52.219440       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-7334/pvc-p8rts"
E0902 13:42:52.234988       1 tokens_controller.go:262] error synchronizing serviceaccount node-lease-test-1545/default: secrets "default-token-q2qpx" is forbidden: unable to create new content in namespace node-lease-test-1545 because it is being terminated
I0902 13:42:52.267506       1 pv_controller.go:640] volume "pvc-ef900183-85b4-4b3f-8c8c-8f580707e704" is released and reclaim policy "Delete" will be executed
I0902 13:42:52.277357       1 pv_controller.go:879] volume "pvc-ef900183-85b4-4b3f-8c8c-8f580707e704" entered phase "Released"
I0902 13:42:52.288905       1 pv_controller.go:1340] isVolumeReleased[pvc-ef900183-85b4-4b3f-8c8c-8f580707e704]: volume is released
E0902 13:42:52.475013       1 tokens_controller.go:262] error synchronizing serviceaccount services-8871/default: secrets "default-token-6jtql" is forbidden: unable to create new content in namespace services-8871 because it is being terminated
E0902 13:42:52.940521       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:42:53.377230       1 namespace_controller.go:185] Namespace has been deleted projected-3921
I0902 13:42:53.440551       1 event.go:294] "Event occurred" object="csi-mock-volumes-7896/pvc-vhnpd" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-7896\" or manually created by system administrator"
I0902 13:42:53.450925       1 pv_controller.go:879] volume "pvc-28406966-1bdb-4197-82a3-6c80aec734f7" entered phase "Bound"
I0902 13:42:53.450958       1 pv_controller.go:982] volume "pvc-28406966-1bdb-4197-82a3-6c80aec734f7" bound to claim "csi-mock-volumes-7896/pvc-vhnpd"
I0902 13:42:53.457787       1 pv_controller.go:823] claim "csi-mock-volumes-7896/pvc-vhnpd" entered phase "Bound"
I0902 13:42:53.953518       1 namespace_controller.go:185] Namespace has been deleted security-context-test-272
I0902 13:42:53.972789       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-28406966-1bdb-4197-82a3-6c80aec734f7" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7896^4") from node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:54.093548       1 namespace_controller.go:185] Namespace has been deleted downward-api-3590
I0902 13:42:54.283209       1 garbagecollector.go:471] "Processing object" object="kubectl-391/httpd" objectUID=b0f9ca74-f865-4c4e-afd6-3a021dfd54e8 kind="CiliumEndpoint" virtual=false
I0902 13:42:54.294586       1 garbagecollector.go:580] "Deleting object" object="kubectl-391/httpd" objectUID=b0f9ca74-f865-4c4e-afd6-3a021dfd54e8 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:42:54.510407       1 event.go:294] "Event occurred" object="csi-mock-volumes-7896/pvc-volume-tester-b9ckx" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-28406966-1bdb-4197-82a3-6c80aec734f7\" "
I0902 13:42:54.510584       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-28406966-1bdb-4197-82a3-6c80aec734f7" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7896^4") from node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:54.838033       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-5753-1914/csi-mockplugin-5b84b9c986" objectUID=e93d40a9-990e-4c04-86f8-d2ec0cef6f23 kind="ControllerRevision" virtual=false
I0902 13:42:54.838347       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5753-1914/csi-mockplugin
I0902 13:42:54.838377       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-5753-1914/csi-mockplugin-0" objectUID=5213cddb-51fe-4314-b153-c49933a2285b kind="Pod" virtual=false
I0902 13:42:54.842169       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-5753-1914/csi-mockplugin-0" objectUID=5213cddb-51fe-4314-b153-c49933a2285b kind="Pod" propagationPolicy=Background
I0902 13:42:54.842312       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-5753-1914/csi-mockplugin-5b84b9c986" objectUID=e93d40a9-990e-4c04-86f8-d2ec0cef6f23 kind="ControllerRevision" propagationPolicy=Background
I0902 13:42:54.958401       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-5753-1914/csi-mockplugin-attacher-d7fc965bd" objectUID=8651cf22-2188-471a-9c87-ce23d82b35d4 kind="ControllerRevision" virtual=false
I0902 13:42:54.959333       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-5753-1914/csi-mockplugin-attacher
I0902 13:42:54.959507       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-5753-1914/csi-mockplugin-attacher-0" objectUID=f6b99cb6-9f2d-4855-ab7e-44ab852441eb kind="Pod" virtual=false
I0902 13:42:54.969454       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-5753-1914/csi-mockplugin-attacher-d7fc965bd" objectUID=8651cf22-2188-471a-9c87-ce23d82b35d4 kind="ControllerRevision" propagationPolicy=Background
I0902 13:42:54.981293       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-5753-1914/csi-mockplugin-attacher-0" objectUID=f6b99cb6-9f2d-4855-ab7e-44ab852441eb kind="Pod" propagationPolicy=Background
E0902 13:42:54.981607       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:42:55.281909       1 namespace_controller.go:185] Namespace has been deleted kubectl-9192
I0902 13:42:55.381976       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5753
I0902 13:42:55.437714       1 controller.go:400] Ensuring load balancer for service deployment-9199/test-rolling-update-with-lb
I0902 13:42:55.438087       1 aws.go:3907] EnsureLoadBalancer(e2e-3c2263334e-b172d.test-cncf-aws.k8s.io, deployment-9199, test-rolling-update-with-lb, eu-central-1, , [{ TCP <nil> 80 {0 80 } 32684}], map[])
I0902 13:42:55.438239       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Normal" reason="EnsuringLoadBalancer" message="Ensuring load balancer"
I0902 13:42:55.735830       1 aws.go:3128] Existing security group ingress: sg-0e1069f261fd831ae [{
  FromPort: 80,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 80
} {
  FromPort: 3,
  IpProtocol: "icmp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 4
}]
I0902 13:42:55.778579       1 aws_loadbalancer.go:1189] Creating additional load balancer tags for a875bc7be8b554a8e807d83bddb80150
I0902 13:42:55.791832       1 aws_loadbalancer.go:1216] Updating load-balancer attributes for "a875bc7be8b554a8e807d83bddb80150"
I0902 13:42:56.015480       1 aws.go:4526] Adding rule for traffic from the load balancer (sg-0e1069f261fd831ae) to instances (sg-0512291329167a3dd)
I0902 13:42:56.071854       1 aws.go:3203] Existing security group ingress: sg-0512291329167a3dd [{
  FromPort: 30000,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 32767
} {
  IpProtocol: "-1",
  UserIdGroupPairs: [{
      GroupId: "sg-0512291329167a3dd",
      UserId: "768319786644"
    },{
      GroupId: "sg-0a730243e78887ea7",
      UserId: "768319786644"
    }]
} {
  FromPort: 22,
  IpProtocol: "tcp",
  IpRanges: [{
      CidrIp: "35.225.74.23/32"
    }],
  ToPort: 22
} {
  FromPort: 30000,
  IpProtocol: "udp",
  IpRanges: [{
      CidrIp: "0.0.0.0/0"
    }],
  ToPort: 32767
}]
I0902 13:42:56.071919       1 aws.go:3100] Comparing sg-0e1069f261fd831ae to sg-0512291329167a3dd
I0902 13:42:56.071924       1 aws.go:3100] Comparing sg-0e1069f261fd831ae to sg-0a730243e78887ea7
I0902 13:42:56.071929       1 aws.go:3231] Adding security group ingress: sg-0512291329167a3dd [{
  IpProtocol: "-1",
  UserIdGroupPairs: [{
      GroupId: "sg-0e1069f261fd831ae"
    }]
}]
I0902 13:42:56.226231       1 event.go:294] "Event occurred" object="statefulset-9084/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0902 13:42:56.231236       1 namespace_controller.go:185] Namespace has been deleted gc-5050
I0902 13:42:56.322238       1 aws_loadbalancer.go:1464] Instances added to load-balancer a875bc7be8b554a8e807d83bddb80150
I0902 13:42:56.322629       1 aws.go:4292] Loadbalancer a875bc7be8b554a8e807d83bddb80150 (deployment-9199/test-rolling-update-with-lb) has DNS name a875bc7be8b554a8e807d83bddb80150-1518655651.eu-central-1.elb.amazonaws.com
I0902 13:42:56.322779       1 controller.go:942] Patching status for service deployment-9199/test-rolling-update-with-lb
I0902 13:42:56.323511       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Normal" reason="EnsuredLoadBalancer" message="Ensured load balancer"
I0902 13:42:56.347094       1 namespace_controller.go:185] Namespace has been deleted provisioning-2998
E0902 13:42:57.158438       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:42:57.272893       1 event.go:294] "Event occurred" object="fsgroupchangepolicy-5411/awskp5f4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
W0902 13:42:57.355495       1 reconciler.go:335] Multi-Attach error for volume "aws-whgq2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447") from node "ip-172-20-49-181.eu-central-1.compute.internal" Volume is already exclusively attached to node ip-172-20-61-191.eu-central-1.compute.internal and can't be attached to another
I0902 13:42:57.357059       1 event.go:294] "Event occurred" object="volume-3640/aws-client" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"aws-whgq2\" Volume is already exclusively attached to one node and can't be attached to another"
E0902 13:42:57.428713       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-3184-4941/default: secrets "default-token-sdts9" is forbidden: unable to create new content in namespace provisioning-3184-4941 because it is being terminated
I0902 13:42:57.516832       1 event.go:294] "Event occurred" object="fsgroupchangepolicy-5411/awskp5f4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0902 13:42:57.650299       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-1457/pvc-69qsk"
I0902 13:42:57.660492       1 pv_controller.go:640] volume "local-pjc4l" is released and reclaim policy "Retain" will be executed
I0902 13:42:57.667789       1 pv_controller.go:879] volume "local-pjc4l" entered phase "Released"
I0902 13:42:57.766290       1 pv_controller_base.go:505] deletion of claim "volume-1457/pvc-69qsk" was already processed
E0902 13:42:57.848142       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:42:57.892162       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-1545
I0902 13:42:57.992910       1 namespace_controller.go:185] Namespace has been deleted subpath-8646
I0902 13:42:58.043688       1 namespace_controller.go:185] Namespace has been deleted services-8871
E0902 13:42:58.815857       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:42:58.909594       1 garbagecollector.go:471] "Processing object" object="webhook-9969/e2e-test-webhook-4khcl" objectUID=5cd38a10-a561-4f6a-b78d-4ed9e47d145e kind="EndpointSlice" virtual=false
I0902 13:42:58.921994       1 garbagecollector.go:580] "Deleting object" object="webhook-9969/e2e-test-webhook-4khcl" objectUID=5cd38a10-a561-4f6a-b78d-4ed9e47d145e kind="EndpointSlice" propagationPolicy=Background
I0902 13:42:59.061871       1 garbagecollector.go:471] "Processing object" object="webhook-9969/sample-webhook-deployment-78988fc6cd" objectUID=a90c4eac-cf7d-4e2b-a957-f3176bc6d761 kind="ReplicaSet" virtual=false
I0902 13:42:59.062256       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-9969/sample-webhook-deployment"
I0902 13:42:59.074905       1 garbagecollector.go:580] "Deleting object" object="webhook-9969/sample-webhook-deployment-78988fc6cd" objectUID=a90c4eac-cf7d-4e2b-a957-f3176bc6d761 kind="ReplicaSet" propagationPolicy=Background
I0902 13:42:59.075097       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-527/awswl9g2"
I0902 13:42:59.080062       1 garbagecollector.go:471] "Processing object" object="webhook-9969/sample-webhook-deployment-78988fc6cd-px48d" objectUID=8e4bb037-8ca3-46e6-96e7-62aee06af5d4 kind="Pod" virtual=false
I0902 13:42:59.083678       1 garbagecollector.go:580] "Deleting object" object="webhook-9969/sample-webhook-deployment-78988fc6cd-px48d" objectUID=8e4bb037-8ca3-46e6-96e7-62aee06af5d4 kind="Pod" propagationPolicy=Background
I0902 13:42:59.095444       1 pv_controller.go:640] volume "pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c" is released and reclaim policy "Delete" will be executed
I0902 13:42:59.102084       1 pv_controller.go:879] volume "pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c" entered phase "Released"
I0902 13:42:59.102567       1 garbagecollector.go:471] "Processing object" object="webhook-9969/sample-webhook-deployment-78988fc6cd-px48d" objectUID=03bb65f5-f7b5-4479-80a3-1d85ed509cf5 kind="CiliumEndpoint" virtual=false
I0902 13:42:59.108276       1 pv_controller.go:1340] isVolumeReleased[pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c]: volume is released
I0902 13:42:59.110815       1 garbagecollector.go:580] "Deleting object" object="webhook-9969/sample-webhook-deployment-78988fc6cd-px48d" objectUID=03bb65f5-f7b5-4479-80a3-1d85ed509cf5 kind="CiliumEndpoint" propagationPolicy=Background
E0902 13:42:59.406476       1 tokens_controller.go:262] error synchronizing serviceaccount apply-650/default: secrets "default-token-w2jpr" is forbidden: unable to create new content in namespace apply-650 because it is being terminated
I0902 13:42:59.694769       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "ebs.csi.aws.com-vol-0d2a0d4e4f14345ac" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d2a0d4e4f14345ac") on node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:42:59.706200       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-ef900183-85b4-4b3f-8c8c-8f580707e704" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7334^4") on node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:42:59.715509       1 operation_generator.go:1577] Verified volume is safe to detach for volume "ebs.csi.aws.com-vol-0d2a0d4e4f14345ac" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d2a0d4e4f14345ac") on node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:42:59.722367       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-ef900183-85b4-4b3f-8c8c-8f580707e704" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7334^4") on node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:42:59.722666       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "aws-whgq2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447") on node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:59.754560       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05d4c067cc3b9501f") on node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:59.754808       1 operation_generator.go:1577] Verified volume is safe to detach for volume "aws-whgq2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447") on node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:59.768586       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05d4c067cc3b9501f") on node "ip-172-20-61-191.eu-central-1.compute.internal" 
I0902 13:42:59.885540       1 event.go:294] "Event occurred" object="csi-mock-volumes-8544-6297/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0902 13:43:00.096624       1 event.go:294] "Event occurred" object="csi-mock-volumes-8544-6297/csi-mockplugin-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I0902 13:43:00.153489       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176503
I0902 13:43:00.159947       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176503
I0902 13:43:00.175023       1 event.go:294] "Event occurred" object="cronjob-1731/successful-jobs-history-limit" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job successful-jobs-history-limit-27176503"
I0902 13:43:00.196784       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176503
I0902 13:43:00.201096       1 event.go:294] "Event occurred" object="cronjob-5301/concurrent" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-27176503"
I0902 13:43:00.208841       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176503
I0902 13:43:00.209020       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176503
I0902 13:43:00.209633       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-1731/successful-jobs-history-limit" resourceVersion="23167" err="Operation cannot be fulfilled on cronjobs.batch \"successful-jobs-history-limit\": the object has been modified; please apply your changes to the latest version and try again"
E0902 13:43:00.209654       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-1731/successful-jobs-history-limit, requeuing: Operation cannot be fulfilled on cronjobs.batch "successful-jobs-history-limit": the object has been modified; 
please apply your changes to the latest version and try again\nI0902 13:43:00.211467       1 event.go:294] \"Event occurred\" object=\"cronjob-5301/concurrent-27176503\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: concurrent-27176503--1-6jlzb\"\nI0902 13:43:00.211501       1 event.go:294] \"Event occurred\" object=\"cronjob-1731/successful-jobs-history-limit-27176503\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: successful-jobs-history-limit-27176503--1-fmb4j\"\nI0902 13:43:00.224630       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176503\nI0902 13:43:00.230178       1 cronjob_controllerv2.go:193] \"Error cleaning up jobs\" cronjob=\"cronjob-5301/concurrent\" resourceVersion=\"23938\" err=\"Operation cannot be fulfilled on cronjobs.batch \\\"concurrent\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0902 13:43:00.231447       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-5301/concurrent, requeuing: Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again\nI0902 13:43:00.242153       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176503\nI0902 13:43:00.242270       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176503\nI0902 13:43:00.245362       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176503\nI0902 13:43:00.265993       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176503\nI0902 13:43:00.271597       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ef900183-85b4-4b3f-8c8c-8f580707e704\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7334^4\") on node 
\"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:00.615433       1 event.go:294] \"Event occurred\" object=\"mounted-volume-expand-9244/pvc-c8v6t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nE0902 13:43:00.717867       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-5806/inline-volume-7q4h6-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0902 13:43:00.718170       1 event.go:294] \"Event occurred\" object=\"ephemeral-5806/inline-volume-7q4h6-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0902 13:43:00.745625       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558b\" need=1 creating=1\nI0902 13:43:00.745986       1 event.go:294] \"Event occurred\" object=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558b to 1\"\nI0902 13:43:00.762200       1 event.go:294] \"Event occurred\" object=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bd5sjk\"\nI0902 13:43:00.793671       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment-608e86e2-8363-4935-8a08-43843864394a\\\": the 
object has been modified; please apply your changes to the latest version and try again\"\nI0902 13:43:00.793793       1 event.go:294] \"Event occurred\" object=\"mounted-volume-expand-9244/pvc-c8v6t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nE0902 13:43:00.895202       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-391/default: secrets \"default-token-vmdl8\" is forbidden: unable to create new content in namespace kubectl-391 because it is being terminated\nE0902 13:43:00.922107       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-8268/default: secrets \"default-token-79cxs\" is forbidden: unable to create new content in namespace secrets-8268 because it is being terminated\nI0902 13:43:00.950924       1 pv_controller.go:879] volume \"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\" entered phase \"Bound\"\nI0902 13:43:00.950962       1 pv_controller.go:982] volume \"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\" bound to claim \"fsgroupchangepolicy-5411/awskp5f4\"\nI0902 13:43:00.961188       1 pv_controller.go:823] claim \"fsgroupchangepolicy-5411/awskp5f4\" entered phase \"Bound\"\nI0902 13:43:01.057658       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-5806, name: inline-volume-7q4h6, uid: 77b9e0c5-17e6-44c0-a49b-78061e363501] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:01.058229       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-7q4h6-my-volume\" objectUID=0ff0a70c-bd50-4873-8343-327fad9dde36 kind=\"PersistentVolumeClaim\" virtual=false\nI0902 13:43:01.058296       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-7q4h6\" objectUID=77b9e0c5-17e6-44c0-a49b-78061e363501 kind=\"Pod\" 
virtual=false\nI0902 13:43:01.066964       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-5806, name: inline-volume-7q4h6-my-volume, uid: 0ff0a70c-bd50-4873-8343-327fad9dde36] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-5806, name: inline-volume-7q4h6, uid: 77b9e0c5-17e6-44c0-a49b-78061e363501] is deletingDependents\nI0902 13:43:01.070050       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5806/inline-volume-7q4h6-my-volume\" objectUID=0ff0a70c-bd50-4873-8343-327fad9dde36 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nE0902 13:43:01.080061       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-5806/inline-volume-7q4h6-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0902 13:43:01.080663       1 event.go:294] \"Event occurred\" object=\"ephemeral-5806/inline-volume-7q4h6-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0902 13:43:01.081028       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-7q4h6-my-volume\" objectUID=0ff0a70c-bd50-4873-8343-327fad9dde36 kind=\"PersistentVolumeClaim\" virtual=false\nI0902 13:43:01.086772       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-5806/inline-volume-7q4h6-my-volume\"\nI0902 13:43:01.095343       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-7q4h6\" objectUID=77b9e0c5-17e6-44c0-a49b-78061e363501 kind=\"Pod\" virtual=false\nI0902 13:43:01.098150       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-5806, name: inline-volume-7q4h6, uid: 77b9e0c5-17e6-44c0-a49b-78061e363501]\nE0902 13:43:01.153717       1 tokens_controller.go:262] error synchronizing serviceaccount 
provisioning-2170/default: secrets \"default-token-6557z\" is forbidden: unable to create new content in namespace provisioning-2170 because it is being terminated\nE0902 13:43:01.351357       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8498/pvc-6dnls: storageclass.storage.k8s.io \"provisioning-8498\" not found\nI0902 13:43:01.351604       1 event.go:294] \"Event occurred\" object=\"provisioning-8498/pvc-6dnls\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8498\\\" not found\"\nI0902 13:43:01.410257       1 namespace_controller.go:185] Namespace has been deleted disruption-5612\nE0902 13:43:01.447130       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:43:01.468830       1 pv_controller.go:879] volume \"local-qgtss\" entered phase \"Available\"\nE0902 13:43:01.526744       1 tokens_controller.go:262] error synchronizing serviceaccount dns-6205/default: secrets \"default-token-w8nkl\" is forbidden: unable to create new content in namespace dns-6205 because it is being terminated\nI0902 13:43:01.574900       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02fdcca57b1c17075\") from node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:01.811730       1 event.go:294] \"Event occurred\" object=\"volumelimits-2455-4539/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0902 13:43:02.263319       1 namespace_controller.go:185] Namespace has been deleted 
ephemeral-6244-9585\nI0902 13:43:02.345499       1 event.go:294] \"Event occurred\" object=\"statefulset-9084/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI0902 13:43:02.664203       1 namespace_controller.go:185] Namespace has been deleted provisioning-3184-4941\nI0902 13:43:03.237506       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"ebs.csi.aws.com-vol-0c2928bbb513138aa\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c2928bbb513138aa\") on node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nE0902 13:43:03.750585       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-9969/default: secrets \"default-token-h428b\" is forbidden: unable to create new content in namespace webhook-9969 because it is being terminated\nI0902 13:43:03.794530       1 pv_controller.go:930] claim \"provisioning-8498/pvc-6dnls\" bound to volume \"local-qgtss\"\nI0902 13:43:03.809028       1 pv_controller.go:1340] isVolumeReleased[pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c]: volume is released\nI0902 13:43:03.809198       1 pv_controller.go:1340] isVolumeReleased[pvc-ef900183-85b4-4b3f-8c8c-8f580707e704]: volume is released\nI0902 13:43:03.822949       1 pv_controller.go:879] volume \"local-qgtss\" entered phase \"Bound\"\nI0902 13:43:03.822982       1 pv_controller.go:982] volume \"local-qgtss\" bound to claim \"provisioning-8498/pvc-6dnls\"\nI0902 13:43:03.835703       1 pv_controller.go:823] claim \"provisioning-8498/pvc-6dnls\" entered phase \"Bound\"\nI0902 13:43:03.837166       1 event.go:294] \"Event occurred\" object=\"volume-expand-4808/awslpqx8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0902 13:43:03.837195       1 event.go:294] \"Event occurred\" 
object=\"mounted-volume-expand-9244/pvc-c8v6t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nE0902 13:43:03.885165       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-9969-markers/default: secrets \"default-token-fqgls\" is forbidden: unable to create new content in namespace webhook-9969-markers because it is being terminated\nI0902 13:43:03.990807       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176503\nI0902 13:43:04.163590       1 pv_controller.go:879] volume \"pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638\" entered phase \"Bound\"\nI0902 13:43:04.163734       1 pv_controller.go:982] volume \"pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638\" bound to claim \"mounted-volume-expand-9244/pvc-c8v6t\"\nI0902 13:43:04.176367       1 pv_controller.go:823] claim \"mounted-volume-expand-9244/pvc-c8v6t\" entered phase \"Bound\"\nI0902 13:43:04.678958       1 namespace_controller.go:185] Namespace has been deleted apply-650\nI0902 13:43:04.812384       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0cab5c787f9ac3eb5\") from node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:43:05.387251       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176503\nI0902 13:43:05.388309       1 event.go:294] \"Event occurred\" object=\"cronjob-1731/successful-jobs-history-limit-27176503\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0902 13:43:05.394280       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176503\nI0902 13:43:05.394753       1 event.go:294] \"Event occurred\" 
object=\"cronjob-1731/successful-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SawCompletedJob\" message=\"Saw completed job: successful-jobs-history-limit-27176503, status: Complete\"\nI0902 13:43:05.400689       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-1731/successful-jobs-history-limit-27176502--1-d8klg\" objectUID=54f892a9-3209-4611-8746-016d4faf3545 kind=\"Pod\" virtual=false\nI0902 13:43:05.400898       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176502\nI0902 13:43:05.401401       1 event.go:294] \"Event occurred\" object=\"cronjob-1731/successful-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted job successful-jobs-history-limit-27176502\"\nI0902 13:43:05.404700       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-1731/successful-jobs-history-limit-27176502--1-d8klg\" objectUID=54f892a9-3209-4611-8746-016d4faf3545 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:05.480932       1 namespace_controller.go:185] Namespace has been deleted emptydir-7042\nI0902 13:43:05.749629       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [mygroup.example.com/v1beta1, Resource=footpp8cas], removed: []\nI0902 13:43:05.766013       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0902 13:43:05.768486       1 graph_builder.go:587] add [mygroup.example.com/v1beta1/footpp8ca, namespace: , name: canarymjws2, uid: b186f9e3-98bb-44df-b1cb-9d61e480824b] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:05.775784       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8544/pvc-2sqk8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by 
external provisioner \\\"csi-mock-csi-mock-volumes-8544\\\" or manually created by system administrator\"\nI0902 13:43:05.788590       1 pv_controller.go:879] volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" entered phase \"Bound\"\nI0902 13:43:05.788630       1 pv_controller.go:982] volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" bound to claim \"csi-mock-volumes-8544/pvc-2sqk8\"\nI0902 13:43:05.795289       1 pv_controller.go:823] claim \"csi-mock-volumes-8544/pvc-2sqk8\" entered phase \"Bound\"\nI0902 13:43:05.815555       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02fdcca57b1c17075\") from node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:05.815739       1 event.go:294] \"Event occurred\" object=\"fsgroupchangepolicy-5411/pod-a5c56acd-e696-4fee-9961-7173c5a19532\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\\\" \"\nI0902 13:43:05.867737       1 shared_informer.go:247] Caches are synced for garbage collector \nI0902 13:43:05.867760       1 garbagecollector.go:254] synced garbage collector\nI0902 13:43:05.867802       1 garbagecollector.go:471] \"Processing object\" object=\"ownertc4bj\" objectUID=8f302718-38b7-4870-b5e2-57ca711eaa9c kind=\"footpp8ca\" virtual=true\nI0902 13:43:05.867931       1 garbagecollector.go:471] \"Processing object\" object=\"canarymjws2\" objectUID=b186f9e3-98bb-44df-b1cb-9d61e480824b kind=\"footpp8ca\" virtual=false\nI0902 13:43:05.869785       1 garbagecollector.go:471] \"Processing object\" object=\"dependentdcs6w\" objectUID=22a18484-f05e-4892-a306-3cab28b34a45 kind=\"footpp8ca\" virtual=false\nI0902 13:43:05.870163       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [mygroup.example.com/v1beta1/footpp8ca, namespace: , name: 
canarymjws2, uid: b186f9e3-98bb-44df-b1cb-9d61e480824b]\nI0902 13:43:05.871440       1 garbagecollector.go:580] \"Deleting object\" object=\"dependentdcs6w\" objectUID=22a18484-f05e-4892-a306-3cab28b34a45 kind=\"footpp8ca\" propagationPolicy=Background\nI0902 13:43:05.902841       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5753-1914\nI0902 13:43:06.047489       1 namespace_controller.go:185] Namespace has been deleted configmap-5562\nI0902 13:43:06.085234       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-9567/pvc-45v7w\"\nI0902 13:43:06.092905       1 pv_controller.go:640] volume \"local-mm5lj\" is released and reclaim policy \"Retain\" will be executed\nI0902 13:43:06.095868       1 pv_controller.go:879] volume \"local-mm5lj\" entered phase \"Released\"\nI0902 13:43:06.155702       1 namespace_controller.go:185] Namespace has been deleted secrets-8268\nI0902 13:43:06.188049       1 namespace_controller.go:185] Namespace has been deleted kubectl-391\nI0902 13:43:06.198871       1 pv_controller_base.go:505] deletion of claim \"volumemode-9567/pvc-45v7w\" was already processed\nI0902 13:43:06.224635       1 namespace_controller.go:185] Namespace has been deleted provisioning-2170\nI0902 13:43:06.252866       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8544^4\") from node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:06.399286       1 event.go:294] \"Event occurred\" object=\"statefulset-9084/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0902 13:43:06.427089       1 event.go:294] \"Event occurred\" object=\"ephemeral-5806-4162/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod 
csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nE0902 13:43:06.569277       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-2392/default: secrets \"default-token-dbcmr\" is forbidden: unable to create new content in namespace kubectl-2392 because it is being terminated\nI0902 13:43:06.624559       1 namespace_controller.go:185] Namespace has been deleted dns-6205\nI0902 13:43:06.678083       1 pv_controller.go:1340] isVolumeReleased[pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c]: volume is released\nI0902 13:43:06.721418       1 event.go:294] \"Event occurred\" object=\"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-d6lct to be scheduled\"\nI0902 13:43:06.777677       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8544^4\") from node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:06.777871       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8544/pvc-volume-tester-rrjrf\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\\\" \"\nI0902 13:43:06.801382       1 pv_controller_base.go:505] deletion of claim \"volume-expand-527/awswl9g2\" was already processed\nI0902 13:43:07.173887       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"ebs.csi.aws.com-vol-0d2a0d4e4f14345ac\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0d2a0d4e4f14345ac\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:07.223276       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-6aa9b22d-7cb5-4b2f-a33c-438aae5ad35c\" (UniqueName: 
\"kubernetes.io/csi/ebs.csi.aws.com^vol-05d4c067cc3b9501f\") on node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:43:07.323143       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-7334/pvc-p8rts\" was already processed\nI0902 13:43:07.363762       1 namespace_controller.go:185] Namespace has been deleted pods-9015\nI0902 13:43:08.565718       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-6135\nI0902 13:43:08.658200       1 event.go:294] \"Event occurred\" object=\"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-5806\\\" or manually created by system administrator\"\nI0902 13:43:08.662173       1 event.go:294] \"Event occurred\" object=\"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-5806\\\" or manually created by system administrator\"\nI0902 13:43:08.794078       1 pv_controller.go:879] volume \"hostpath-gh647\" entered phase \"Available\"\nI0902 13:43:08.860595       1 namespace_controller.go:185] Namespace has been deleted webhook-9969\nI0902 13:43:08.884617       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-7896/pvc-vhnpd\"\nI0902 13:43:08.893175       1 pv_controller.go:640] volume \"pvc-28406966-1bdb-4197-82a3-6c80aec734f7\" is released and reclaim policy \"Delete\" will be executed\nI0902 13:43:08.900585       1 pv_controller.go:879] volume \"pvc-28406966-1bdb-4197-82a3-6c80aec734f7\" entered phase \"Released\"\nI0902 13:43:08.906593       1 pv_controller.go:1340] isVolumeReleased[pvc-28406966-1bdb-4197-82a3-6c80aec734f7]: volume is released\nI0902 
13:43:08.976646       1 namespace_controller.go:185] Namespace has been deleted webhook-9969-markers\nI0902 13:43:09.035383       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-1731/successful-jobs-history-limit-27176503\" objectUID=51759f9d-7a7b-4329-9b39-c264715b15f4 kind=\"Job\" virtual=false\nI0902 13:43:09.038984       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-1731/successful-jobs-history-limit-27176503\" objectUID=51759f9d-7a7b-4329-9b39-c264715b15f4 kind=\"Job\" propagationPolicy=Background\nI0902 13:43:09.041797       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-1731/successful-jobs-history-limit-27176503--1-fmb4j\" objectUID=4514d30c-22e7-4053-ab5b-93989a90a6e6 kind=\"Pod\" virtual=false\nI0902 13:43:09.041851       1 job_controller.go:406] enqueueing job cronjob-1731/successful-jobs-history-limit-27176503\nI0902 13:43:09.043459       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-1731/successful-jobs-history-limit-27176503--1-fmb4j\" objectUID=4514d30c-22e7-4053-ab5b-93989a90a6e6 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:09.672512       1 pv_controller.go:879] volume \"pvc-e34d427f-5384-43e9-aa09-c9d6070d5536\" entered phase \"Bound\"\nI0902 13:43:09.672548       1 pv_controller.go:982] volume \"pvc-e34d427f-5384-43e9-aa09-c9d6070d5536\" bound to claim \"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\"\nI0902 13:43:09.683140       1 pv_controller.go:823] claim \"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\" entered phase \"Bound\"\nI0902 13:43:09.717996       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-28406966-1bdb-4197-82a3-6c80aec734f7\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7896^4\") on node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:43:09.721843       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-28406966-1bdb-4197-82a3-6c80aec734f7\" 
(UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7896^4\") on node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nE0902 13:43:09.819359       1 namespace_controller.go:162] deletion of namespace kubectl-1430 failed: unexpected items still remain in namespace: kubectl-1430 for gvr: /v1, Resource=pods\nE0902 13:43:09.912181       1 pv_protection_controller.go:118] PV pvc-28406966-1bdb-4197-82a3-6c80aec734f7 failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-28406966-1bdb-4197-82a3-6c80aec734f7\": the object has been modified; please apply your changes to the latest version and try again\nI0902 13:43:09.916096       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-7896/pvc-vhnpd\" was already processed\nI0902 13:43:10.065224       1 namespace_controller.go:185] Namespace has been deleted volume-4754\nI0902 13:43:10.232877       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-28406966-1bdb-4197-82a3-6c80aec734f7\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7896^4\") on node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:43:10.323837       1 pv_controller.go:879] volume \"local-pv4kwx7\" entered phase \"Available\"\nI0902 13:43:10.432433       1 pv_controller.go:930] claim \"persistent-local-volumes-test-3631/pvc-p82rr\" bound to volume \"local-pv4kwx7\"\nI0902 13:43:10.439147       1 pv_controller.go:879] volume \"local-pv4kwx7\" entered phase \"Bound\"\nI0902 13:43:10.439176       1 pv_controller.go:982] volume \"local-pv4kwx7\" bound to claim \"persistent-local-volumes-test-3631/pvc-p82rr\"\nI0902 13:43:10.446300       1 pv_controller.go:823] claim \"persistent-local-volumes-test-3631/pvc-p82rr\" entered phase \"Bound\"\nI0902 13:43:10.704729       1 namespace_controller.go:185] Namespace has been deleted volume-1457\nI0902 13:43:10.730427       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume 
\"pvc-e34d427f-5384-43e9-aa09-c9d6070d5536\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-5806^b5ca650f-0bf3-11ec-955d-9a42dd389afb\") from node \"ip-172-20-45-138.eu-central-1.compute.internal\" \nI0902 13:43:11.252687       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-e34d427f-5384-43e9-aa09-c9d6070d5536\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-5806^b5ca650f-0bf3-11ec-955d-9a42dd389afb\") from node \"ip-172-20-45-138.eu-central-1.compute.internal\" \nI0902 13:43:11.253065       1 event.go:294] \"Event occurred\" object=\"ephemeral-5806/inline-volume-tester-d6lct\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e34d427f-5384-43e9-aa09-c9d6070d5536\\\" \"\nI0902 13:43:11.535914       1 namespace_controller.go:185] Namespace has been deleted flexvolume-5110\nI0902 13:43:11.651329       1 namespace_controller.go:185] Namespace has been deleted kubectl-2392\nI0902 13:43:12.262201       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0cab5c787f9ac3eb5\") from node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:43:12.262393       1 event.go:294] \"Event occurred\" object=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bd5sjk\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638\\\" \"\nI0902 13:43:12.293281       1 namespace_controller.go:185] Namespace has been deleted configmap-2329\nI0902 13:43:12.329926       1 namespace_controller.go:185] Namespace has been deleted volume-9897\nI0902 13:43:12.878713       1 pv_controller.go:879] volume \"local-pvx9fzh\" entered phase \"Available\"\nI0902 13:43:12.986177       1 
pv_controller.go:930] claim \"persistent-local-volumes-test-6102/pvc-bkh78\" bound to volume \"local-pvx9fzh\"\nI0902 13:43:12.999517       1 pv_controller.go:879] volume \"local-pvx9fzh\" entered phase \"Bound\"\nI0902 13:43:12.999688       1 pv_controller.go:982] volume \"local-pvx9fzh\" bound to claim \"persistent-local-volumes-test-6102/pvc-bkh78\"\nI0902 13:43:13.014934       1 pv_controller.go:823] claim \"persistent-local-volumes-test-6102/pvc-bkh78\" entered phase \"Bound\"\nE0902 13:43:13.099119       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-3908/default: secrets \"default-token-qrgfr\" is forbidden: unable to create new content in namespace configmap-3908 because it is being terminated\nI0902 13:43:13.442935       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-whgq2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447\") on node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:43:13.477071       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-whgq2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447\") from node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nE0902 13:43:13.678842       1 tokens_controller.go:262] error synchronizing serviceaccount kubelet-test-6468/default: secrets \"default-token-k4ffl\" is forbidden: unable to create new content in namespace kubelet-test-6468 because it is being terminated\nI0902 13:43:13.973247       1 namespace_controller.go:185] Namespace has been deleted security-context-8458\nE0902 13:43:14.410034       1 tokens_controller.go:262] error synchronizing serviceaccount cronjob-1731/default: secrets \"default-token-rrfwx\" is forbidden: unable to create new content in namespace cronjob-1731 because it is being terminated\nE0902 13:43:14.514229       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-7334/default: secrets \"default-token-qlt47\" 
is forbidden: unable to create new content in namespace csi-mock-volumes-7334 because it is being terminated\nE0902 13:43:14.561974       1 tokens_controller.go:262] error synchronizing serviceaccount pv-protection-2810/default: secrets \"default-token-7sb5m\" is forbidden: unable to create new content in namespace pv-protection-2810 because it is being terminated\nE0902 13:43:14.799796       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-527/default: secrets \"default-token-8hn29\" is forbidden: unable to create new content in namespace volume-expand-527 because it is being terminated\nE0902 13:43:15.137220       1 tokens_controller.go:262] error synchronizing serviceaccount volumelimits-8270/default: secrets \"default-token-prbk6\" is forbidden: unable to create new content in namespace volumelimits-8270 because it is being terminated\nI0902 13:43:15.247880       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-8544/pvc-2sqk8\"\nI0902 13:43:15.261661       1 pv_controller.go:640] volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" is released and reclaim policy \"Delete\" will be executed\nI0902 13:43:15.274620       1 pv_controller.go:879] volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" entered phase \"Released\"\nI0902 13:43:15.285598       1 pv_controller.go:1340] isVolumeReleased[pvc-ac19b6b7-9679-4e2d-b519-819ce8682482]: volume is released\nI0902 13:43:15.304986       1 pv_controller.go:879] volume \"local-pvvpm8w\" entered phase \"Available\"\nI0902 13:43:15.408226       1 pv_controller.go:930] claim \"persistent-local-volumes-test-599/pvc-92fsd\" bound to volume \"local-pvvpm8w\"\nI0902 13:43:15.429995       1 pv_controller.go:879] volume \"local-pvvpm8w\" entered phase \"Bound\"\nI0902 13:43:15.430026       1 pv_controller.go:982] volume \"local-pvvpm8w\" bound to claim \"persistent-local-volumes-test-599/pvc-92fsd\"\nI0902 13:43:15.441833       1 pv_controller.go:823] claim 
\"persistent-local-volumes-test-599/pvc-92fsd\" entered phase \"Bound\"\nI0902 13:43:15.730342       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"aws-whgq2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447\") from node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:15.730635       1 event.go:294] \"Event occurred\" object=\"volume-3640/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-whgq2\\\" \"\nI0902 13:43:16.207343       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-5806, name: inline-volume-tester-d6lct, uid: 5a854bbc-fece-45e7-a62a-fb0a8c0c3daa] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:16.207555       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\" objectUID=e34d427f-5384-43e9-aa09-c9d6070d5536 kind=\"PersistentVolumeClaim\" virtual=false\nI0902 13:43:16.208336       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-tester-d6lct\" objectUID=5a854bbc-fece-45e7-a62a-fb0a8c0c3daa kind=\"Pod\" virtual=false\nI0902 13:43:16.208293       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-tester-d6lct\" objectUID=df30bcc6-a94f-4cc8-ad3c-ebc618172368 kind=\"CiliumEndpoint\" virtual=false\nE0902 13:43:16.219431       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:43:16.224919       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-5806, name: inline-volume-tester-d6lct, uid: df30bcc6-a94f-4cc8-ad3c-ebc618172368] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-5806, name: inline-volume-tester-d6lct, uid: 
5a854bbc-fece-45e7-a62a-fb0a8c0c3daa] is deletingDependents\nI0902 13:43:16.225041       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-5806, name: inline-volume-tester-d6lct-my-volume-0, uid: e34d427f-5384-43e9-aa09-c9d6070d5536] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-5806, name: inline-volume-tester-d6lct, uid: 5a854bbc-fece-45e7-a62a-fb0a8c0c3daa] is deletingDependents\nI0902 13:43:16.231646       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\" objectUID=e34d427f-5384-43e9-aa09-c9d6070d5536 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0902 13:43:16.231939       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-5806/inline-volume-tester-d6lct\" objectUID=df30bcc6-a94f-4cc8-ad3c-ebc618172368 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:43:16.243341       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-tester-d6lct\" objectUID=5a854bbc-fece-45e7-a62a-fb0a8c0c3daa kind=\"Pod\" virtual=false\nI0902 13:43:16.248464       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-tester-d6lct\" objectUID=df30bcc6-a94f-4cc8-ad3c-ebc618172368 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:16.255243       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-5806, name: inline-volume-tester-d6lct-my-volume-0, uid: e34d427f-5384-43e9-aa09-c9d6070d5536] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-5806, name: inline-volume-tester-d6lct, uid: 5a854bbc-fece-45e7-a62a-fb0a8c0c3daa] is deletingDependents\nI0902 13:43:16.263019       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-5806/inline-volume-tester-d6lct\" PVC=\"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\"\nI0902 13:43:16.263138       1 pvc_protection_controller.go:181] \"Keeping PVC because it 
is being used\" PVC=\"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\"\nI0902 13:43:16.263697       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-5806/inline-volume-tester-d6lct-my-volume-0\" objectUID=e34d427f-5384-43e9-aa09-c9d6070d5536 kind=\"PersistentVolumeClaim\" virtual=false\nW0902 13:43:16.270263       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:43:16.363715       1 namespace_controller.go:185] Namespace has been deleted volumelimits-2455\nI0902 13:43:16.460794       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-9188/service-headless\" need=3 creating=3\nW0902 13:43:16.563007       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:43:16.598686       1 event.go:294] \"Event occurred\" object=\"services-9188/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-njsj9\"\nI0902 13:43:16.655128       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-8498/pvc-6dnls\"\nI0902 13:43:16.655747       1 event.go:294] \"Event occurred\" object=\"services-9188/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-gpr62\"\nW0902 13:43:16.661283       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:43:16.666140       1 event.go:294] \"Event occurred\" object=\"services-9188/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-w5cvz\"\nI0902 13:43:16.711231       1 pv_controller.go:640] volume \"local-qgtss\" 
is released and reclaim policy \"Retain\" will be executed\nI0902 13:43:16.743349       1 pv_controller.go:879] volume \"local-qgtss\" entered phase \"Released\"\nI0902 13:43:16.761373       1 pv_controller_base.go:505] deletion of claim \"provisioning-8498/pvc-6dnls\" was already processed\nE0902 13:43:16.793914       1 tokens_controller.go:262] error synchronizing serviceaccount volume-126/default: secrets \"default-token-7vq6v\" is forbidden: unable to create new content in namespace volume-126 because it is being terminated\nI0902 13:43:16.804262       1 garbagecollector.go:471] \"Processing object\" object=\"volumelimits-2455-4539/csi-hostpathplugin-f8c669df9\" objectUID=d9754ca3-fe32-4006-b029-54fa328e44fe kind=\"ControllerRevision\" virtual=false\nI0902 13:43:16.804604       1 stateful_set.go:440] StatefulSet has been deleted volumelimits-2455-4539/csi-hostpathplugin\nI0902 13:43:16.804698       1 garbagecollector.go:471] \"Processing object\" object=\"volumelimits-2455-4539/csi-hostpathplugin-0\" objectUID=93cd8fbf-1821-44c1-8d7a-afc13fae4966 kind=\"Pod\" virtual=false\nI0902 13:43:16.808871       1 garbagecollector.go:580] \"Deleting object\" object=\"volumelimits-2455-4539/csi-hostpathplugin-0\" objectUID=93cd8fbf-1821-44c1-8d7a-afc13fae4966 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:16.809414       1 garbagecollector.go:580] \"Deleting object\" object=\"volumelimits-2455-4539/csi-hostpathplugin-f8c669df9\" objectUID=d9754ca3-fe32-4006-b029-54fa328e44fe kind=\"ControllerRevision\" propagationPolicy=Background\nE0902 13:43:16.977112       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-7264/default: secrets \"default-token-ch2zq\" is forbidden: unable to create new content in namespace kubectl-7264 because it is being terminated\nI0902 13:43:17.040533       1 resource_quota_controller.go:307] Resource quota has been deleted kubectl-7264/million\nI0902 13:43:17.207925       1 event.go:294] \"Event occurred\" 
object=\"volume-expand-4808/awslpqx8\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0902 13:43:17.215115       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-expand-4808/awslpqx8\"\nI0902 13:43:17.581518       1 stateful_set_control.go:555] StatefulSet statefulset-9084/ss2 terminating Pod ss2-2 for update\nI0902 13:43:17.587171       1 event.go:294] \"Event occurred\" object=\"statefulset-9084/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nE0902 13:43:17.610356       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0902 13:43:17.921803       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:43:18.220666       1 namespace_controller.go:185] Namespace has been deleted configmap-3908\nI0902 13:43:18.276899       1 namespace_controller.go:185] Namespace has been deleted volumemode-9567\nI0902 13:43:18.798042       1 pv_controller.go:1340] isVolumeReleased[pvc-ac19b6b7-9679-4e2d-b519-819ce8682482]: volume is released\nW0902 13:43:18.930091       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:43:19.026649       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7334-6013/csi-mockplugin-549dfcd4c8\" objectUID=13e988bc-8c6d-40b6-b668-38a85904d95e kind=\"ControllerRevision\" virtual=false\nI0902 13:43:19.027283       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7334-6013/csi-mockplugin\nI0902 13:43:19.027346       1 
garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7334-6013/csi-mockplugin-0\" objectUID=ba678141-363c-426e-8aed-2340e0cfaed2 kind=\"Pod\" virtual=false\nI0902 13:43:19.029658       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7334-6013/csi-mockplugin-549dfcd4c8\" objectUID=13e988bc-8c6d-40b6-b668-38a85904d95e kind=\"ControllerRevision\" propagationPolicy=Background\nI0902 13:43:19.030066       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7334-6013/csi-mockplugin-0\" objectUID=ba678141-363c-426e-8aed-2340e0cfaed2 kind=\"Pod\" propagationPolicy=Background\nW0902 13:43:19.080322       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:43:19.213711       1 namespace_controller.go:185] Namespace has been deleted job-9294\nI0902 13:43:19.283077       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7334-6013/csi-mockplugin-attacher-86986686c8\" objectUID=91a24db3-b94a-4e5c-b032-02715b10a852 kind=\"ControllerRevision\" virtual=false\nI0902 13:43:19.283280       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7334-6013/csi-mockplugin-attacher\nI0902 13:43:19.283407       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7334-6013/csi-mockplugin-attacher-0\" objectUID=68a771e9-108b-4865-8ad5-230f2d8b33c0 kind=\"Pod\" virtual=false\nI0902 13:43:19.285755       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7334-6013/csi-mockplugin-attacher-86986686c8\" objectUID=91a24db3-b94a-4e5c-b032-02715b10a852 kind=\"ControllerRevision\" propagationPolicy=Background\nI0902 13:43:19.285922       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7334-6013/csi-mockplugin-attacher-0\" objectUID=68a771e9-108b-4865-8ad5-230f2d8b33c0 kind=\"Pod\" propagationPolicy=Background\nE0902 13:43:19.398839  
     1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:43:19.426950       1 garbagecollector.go:471] \"Processing object\" object=\"dns-6606/dns-test-0d211de5-cba7-4b5d-860f-77ced4d4e7b7\" objectUID=d2cf0d86-b571-4f9e-9f7e-9f64cd18f2dd kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:19.447734       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-6606/dns-test-0d211de5-cba7-4b5d-860f-77ced4d4e7b7\" objectUID=d2cf0d86-b571-4f9e-9f7e-9f64cd18f2dd kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:43:19.484769       1 namespace_controller.go:185] Namespace has been deleted cronjob-1731\nI0902 13:43:19.516453       1 event.go:294] \"Event occurred\" object=\"mounted-volume-expand-9244/pvc-c8v6t\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalExpanding\" message=\"CSI migration enabled for kubernetes.io/aws-ebs; waiting for external resizer to expand the pvc\"\nI0902 13:43:19.718737       1 namespace_controller.go:185] Namespace has been deleted pv-protection-2810\nI0902 13:43:19.729718       1 namespace_controller.go:185] Namespace has been deleted crictl-4863\nI0902 13:43:19.757961       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7334\nI0902 13:43:19.795121       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8544^4\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:19.797742       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8544^4\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:19.930373       1 
garbagecollector.go:471] \"Processing object\" object=\"kubectl-4535/httpd\" objectUID=cfa672fe-3314-445c-82f2-218852cdcc97 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:19.946825       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-4535/httpd\" objectUID=cfa672fe-3314-445c-82f2-218852cdcc97 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:43:19.976883       1 namespace_controller.go:185] Namespace has been deleted volume-expand-527\nI0902 13:43:20.253036       1 event.go:294] \"Event occurred\" object=\"statefulset-9084/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0902 13:43:20.333984       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ac19b6b7-9679-4e2d-b519-819ce8682482\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8544^4\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:20.342983       1 namespace_controller.go:185] Namespace has been deleted volumelimits-8270\nI0902 13:43:20.556910       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3631/pod-555cffec-c78b-468f-92c4-c463fbf0f0fe\" PVC=\"persistent-local-volumes-test-3631/pvc-p82rr\"\nI0902 13:43:20.556941       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3631/pvc-p82rr\"\nI0902 13:43:20.749348       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7896-96/csi-mockplugin-6d4d4486f6\" objectUID=833614a8-239c-4349-8c86-11cf40aeb000 kind=\"ControllerRevision\" virtual=false\nI0902 13:43:20.749632       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7896-96/csi-mockplugin\nI0902 13:43:20.749665       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7896-96/csi-mockplugin-0\" 
objectUID=dd6c7195-6d8e-47a3-9c3f-8b316321f46f kind=\"Pod\" virtual=false\nI0902 13:43:20.754258       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7896-96/csi-mockplugin-6d4d4486f6\" objectUID=833614a8-239c-4349-8c86-11cf40aeb000 kind=\"ControllerRevision\" propagationPolicy=Background\nI0902 13:43:20.754599       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7896-96/csi-mockplugin-0\" objectUID=dd6c7195-6d8e-47a3-9c3f-8b316321f46f kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:20.839383       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-599/pod-2b7aab76-a5bb-444b-afe2-cb629d9fec71\" PVC=\"persistent-local-volumes-test-599/pvc-92fsd\"\nI0902 13:43:20.839553       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-599/pvc-92fsd\"\nI0902 13:43:21.022422       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7896-96/csi-mockplugin-attacher-6d7874897b\" objectUID=cf641143-9714-4426-8089-dfa4daaba21d kind=\"ControllerRevision\" virtual=false\nI0902 13:43:21.022547       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-7896-96/csi-mockplugin-attacher\nI0902 13:43:21.022675       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-7896-96/csi-mockplugin-attacher-0\" objectUID=b1868d8b-d62b-4d95-9136-8de74e121452 kind=\"Pod\" virtual=false\nI0902 13:43:21.026329       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7896-96/csi-mockplugin-attacher-6d7874897b\" objectUID=cf641143-9714-4426-8089-dfa4daaba21d kind=\"ControllerRevision\" propagationPolicy=Background\nI0902 13:43:21.026465       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-7896-96/csi-mockplugin-attacher-0\" objectUID=b1868d8b-d62b-4d95-9136-8de74e121452 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:21.084188       1 
pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-599/pod-2b7aab76-a5bb-444b-afe2-cb629d9fec71\" PVC=\"persistent-local-volumes-test-599/pvc-92fsd\"\nI0902 13:43:21.084368       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-599/pvc-92fsd\"\nI0902 13:43:21.391659       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-599/pod-2b7aab76-a5bb-444b-afe2-cb629d9fec71\" PVC=\"persistent-local-volumes-test-599/pvc-92fsd\"\nI0902 13:43:21.393340       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-599/pvc-92fsd\"\nI0902 13:43:21.401538       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-599/pvc-92fsd\"\nI0902 13:43:21.409294       1 pv_controller.go:640] volume \"local-pvvpm8w\" is released and reclaim policy \"Retain\" will be executed\nI0902 13:43:21.413708       1 pv_controller.go:879] volume \"local-pvvpm8w\" entered phase \"Released\"\nI0902 13:43:21.427301       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-599/pvc-92fsd\" was already processed\nW0902 13:43:21.523761       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:43:22.234382       1 namespace_controller.go:185] Namespace has been deleted kubectl-7264\nE0902 13:43:22.258066       1 tokens_controller.go:262] error synchronizing serviceaccount volumelimits-2455-4539/default: secrets \"default-token-dbjj8\" is forbidden: unable to create new content in namespace volumelimits-2455-4539 because it is being terminated\nI0902 13:43:22.284099       1 namespace_controller.go:185] Namespace has been deleted volume-126\nI0902 13:43:22.303690       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-8544/pvc-2sqk8\" was already 
processed\nI0902 13:43:22.303908       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7896\nI0902 13:43:22.437207       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6102/pod-dbbfa4d9-f9ca-40f3-9ce6-03147e6ba383\" PVC=\"persistent-local-volumes-test-6102/pvc-bkh78\"\nI0902 13:43:22.437233       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6102/pvc-bkh78\"\nI0902 13:43:22.522918       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3631/pod-555cffec-c78b-468f-92c4-c463fbf0f0fe\" PVC=\"persistent-local-volumes-test-3631/pvc-p82rr\"\nI0902 13:43:22.522945       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3631/pvc-p82rr\"\nW0902 13:43:22.532268       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:43:22.723455       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-3631/pod-555cffec-c78b-468f-92c4-c463fbf0f0fe\" PVC=\"persistent-local-volumes-test-3631/pvc-p82rr\"\nI0902 13:43:22.723622       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-3631/pvc-p82rr\"\nI0902 13:43:22.728242       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-3631/pvc-p82rr\"\nI0902 13:43:22.734732       1 pv_controller.go:640] volume \"local-pv4kwx7\" is released and reclaim policy \"Retain\" will be executed\nI0902 13:43:22.740254       1 pv_controller.go:879] volume \"local-pv4kwx7\" entered phase \"Released\"\nI0902 13:43:22.741919       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-3631/pvc-p82rr\" was already processed\nE0902 13:43:22.923311       1 
tokens_controller.go:262] error synchronizing serviceaccount secrets-4/default: secrets \"default-token-gj8kk\" is forbidden: unable to create new content in namespace secrets-4 because it is being terminated\nE0902 13:43:22.924401       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8498/default: secrets \"default-token-s89tc\" is forbidden: unable to create new content in namespace provisioning-8498 because it is being terminated\nI0902 13:43:22.957222       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-9188/service-headless-toggled\" need=3 creating=3\nI0902 13:43:22.963150       1 event.go:294] \"Event occurred\" object=\"services-9188/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-r5zwc\"\nI0902 13:43:22.976841       1 event.go:294] \"Event occurred\" object=\"services-9188/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-rb2f6\"\nI0902 13:43:22.976870       1 event.go:294] \"Event occurred\" object=\"services-9188/service-headless-toggled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-toggled-pnghk\"\nE0902 13:43:23.004491       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:43:24.064901       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558b\" need=1 creating=1\nI0902 13:43:24.069414       1 garbagecollector.go:471] \"Processing object\" 
object=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bd5sjk\" objectUID=5b30be94-1e93-4f71-8e42-efa2af2fe15c kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:24.074059       1 event.go:294] \"Event occurred\" object=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bbs2hd\"\nI0902 13:43:24.077756       1 garbagecollector.go:580] \"Deleting object\" object=\"mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bd5sjk\" objectUID=5b30be94-1e93-4f71-8e42-efa2af2fe15c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:43:24.268114       1 event.go:294] \"Event occurred\" object=\"statefulset-6091/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0902 13:43:24.272599       1 event.go:294] \"Event occurred\" object=\"statefulset-6091/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nI0902 13:43:24.282488       1 event.go:294] \"Event occurred\" object=\"statefulset-6091/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0902 13:43:24.297518       1 event.go:294] \"Event occurred\" object=\"statefulset-6091/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0902 13:43:25.343999       1 namespace_controller.go:185] Namespace has been 
deleted disruption-5704
I0902 13:43:25.676835       1 namespace_controller.go:185] Namespace has been deleted gc-8409
E0902 13:43:26.340637       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-7896-96/default: secrets "default-token-lljm6" is forbidden: unable to create new content in namespace csi-mock-volumes-7896-96 because it is being terminated
I0902 13:43:26.907370       1 replica_set.go:563] "Too few replicas" replicaSet="kubectl-9323/httpd-deployment-8584777d8" need=1 creating=1
I0902 13:43:26.908431       1 event.go:294] "Event occurred" object="kubectl-9323/httpd-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set httpd-deployment-8584777d8 to 1"
I0902 13:43:26.921192       1 event.go:294] "Event occurred" object="kubectl-9323/httpd-deployment-8584777d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: httpd-deployment-8584777d8-lxk2k"
I0902 13:43:26.928918       1 deployment_controller.go:490] "Error syncing deployment" deployment="kubectl-9323/httpd-deployment" err="Operation cannot be fulfilled on deployments.apps \"httpd-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:43:26.949488       1 deployment_controller.go:490] "Error syncing deployment" deployment="kubectl-9323/httpd-deployment" err="Operation cannot be fulfilled on deployments.apps \"httpd-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:43:26.962040       1 deployment_controller.go:490] "Error syncing deployment" deployment="kubectl-9323/httpd-deployment" err="Operation cannot be fulfilled on deployments.apps \"httpd-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:43:27.010947       1 event.go:294] "Event occurred" object="volume-expand-2642-3929/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0902 13:43:27.326701       1 event.go:294] "Event occurred" object="volume-expand-2642/csi-hostpathjjr5w" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-2642\" or manually created by system administrator"
I0902 13:43:27.373197       1 namespace_controller.go:185] Namespace has been deleted volumelimits-2455-4539
E0902 13:43:27.525703       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-3631/default: secrets "default-token-dfv67" is forbidden: unable to create new content in namespace persistent-local-volumes-test-3631 because it is being terminated
I0902 13:43:27.667516       1 pv_controller.go:879] volume "pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae" entered phase "Bound"
I0902 13:43:27.668074       1 pv_controller.go:982] volume "pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae" bound to claim "statefulset-6091/datadir-ss-0"
I0902 13:43:27.678352       1 pv_controller.go:823] claim "statefulset-6091/datadir-ss-0" entered phase "Bound"
I0902 13:43:27.688596       1 namespace_controller.go:185] Namespace has been deleted volume-expand-4808
E0902 13:43:27.882050       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8133/pvc-5m6x2: storageclass.storage.k8s.io "provisioning-8133" not found
I0902 13:43:27.882312       1 event.go:294] "Event occurred" object="provisioning-8133/pvc-5m6x2" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-8133\" not found"
I0902 13:43:27.997860       1 pv_controller.go:879] volume "local-v84f2" entered phase "Available"
I0902 13:43:28.074401       1 garbagecollector.go:471] "Processing object" object="kubectl-9323/httpd-deployment-8584777d8" objectUID=3dbc316d-9cf5-4102-aaaa-65c7fa77d3ca kind="ReplicaSet" virtual=false
I0902 13:43:28.074836       1 deployment_controller.go:583] "Deployment has been deleted" deployment="kubectl-9323/httpd-deployment"
I0902 13:43:28.077274       1 garbagecollector.go:580] "Deleting object" object="kubectl-9323/httpd-deployment-8584777d8" objectUID=3dbc316d-9cf5-4102-aaaa-65c7fa77d3ca kind="ReplicaSet" propagationPolicy=Background
I0902 13:43:28.080057       1 garbagecollector.go:471] "Processing object" object="kubectl-9323/httpd-deployment-8584777d8-lxk2k" objectUID=951001e4-4c11-4dd4-b497-819aceb91938 kind="Pod" virtual=false
I0902 13:43:28.086421       1 garbagecollector.go:580] "Deleting object" object="kubectl-9323/httpd-deployment-8584777d8-lxk2k" objectUID=951001e4-4c11-4dd4-b497-819aceb91938 kind="Pod" propagationPolicy=Background
I0902 13:43:28.154104       1 garbagecollector.go:471] "Processing object" object="statefulset-9084/ss2-0" objectUID=d0372a2b-a85d-4e23-a37e-0834f79e45e6 kind="CiliumEndpoint" virtual=false
I0902 13:43:28.159386       1 garbagecollector.go:580] "Deleting object" object="statefulset-9084/ss2-0" objectUID=d0372a2b-a85d-4e23-a37e-0834f79e45e6 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:43:28.171042       1 event.go:294] "Event occurred" object="statefulset-9084/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0902 13:43:28.177578       1 namespace_controller.go:185] Namespace has been deleted secrets-4
I0902 13:43:28.225011       1 namespace_controller.go:185] Namespace has been deleted provisioning-8498
I0902 13:43:28.275926       1 garbagecollector.go:471] "Processing object" object="statefulset-9084/ss2-2" objectUID=d434708b-a3f2-4af0-8f62-afa966c2fb36 kind="CiliumEndpoint" virtual=false
I0902 13:43:28.282676       1 garbagecollector.go:580] "Deleting object" object="statefulset-9084/ss2-2" objectUID=d434708b-a3f2-4af0-8f62-afa966c2fb36 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:43:28.382709       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-002a09058d0b38e43") from node "ip-172-20-42-46.eu-central-1.compute.internal"
E0902 13:43:29.079724       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-8544/default: secrets "default-token-fnzkb" is forbidden: unable to create new content in namespace csi-mock-volumes-8544 because it is being terminated
I0902 13:43:29.655418       1 namespace_controller.go:185] Namespace has been deleted podtemplate-2750
I0902 13:43:29.692685       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7334-6013
I0902 13:43:29.961947       1 namespace_controller.go:185] Namespace has been deleted dns-6606
I0902 13:43:30.645008       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-002a09058d0b38e43") from node "ip-172-20-42-46.eu-central-1.compute.internal"
I0902 13:43:30.645239       1 event.go:294] "Event occurred" object="statefulset-6091/ss-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae\" "
I0902 13:43:30.675371       1 namespace_controller.go:185] Namespace has been deleted apparmor-9395
I0902 13:43:30.734770       1 event.go:294] "Event occurred" object="statefulset-9084/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
E0902 13:43:30.801354       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:30.876710       1 replica_set.go:563] "Too few replicas" replicaSet="disruption-6190/rs" need=10 creating=10
I0902 13:43:30.883548       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-bn2js"
I0902 13:43:30.890601       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-v6fzl"
I0902 13:43:30.894110       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-c8dt7"
I0902 13:43:30.905871       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-fdrh4"
I0902 13:43:30.910783       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-wt2r8"
I0902 13:43:30.915156       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-5sps2"
I0902 13:43:30.916316       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-9hb4b"
I0902 13:43:30.928339       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-hf6x5"
I0902 13:43:30.928773       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-wcm9w"
I0902 13:43:30.939381       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-x4hkz"
I0902 13:43:31.666117       1 namespace_controller.go:185] Namespace has been deleted kubectl-4535
E0902 13:43:31.687856       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:32.434157       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-6102/pod-dbbfa4d9-f9ca-40f3-9ce6-03147e6ba383" PVC="persistent-local-volumes-test-6102/pvc-bkh78"
I0902 13:43:32.434222       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-6102/pvc-bkh78"
I0902 13:43:32.692517       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3631
I0902 13:43:32.839513       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-6102/pod-dbbfa4d9-f9ca-40f3-9ce6-03147e6ba383" PVC="persistent-local-volumes-test-6102/pvc-bkh78"
I0902 13:43:32.839545       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-6102/pvc-bkh78"
I0902 13:43:32.844798       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-6102/pvc-bkh78"
I0902 13:43:32.862314       1 pv_controller.go:640] volume "local-pvx9fzh" is released and reclaim policy "Retain" will be executed
I0902 13:43:32.867499       1 pv_controller.go:879] volume "local-pvx9fzh" entered phase "Released"
I0902 13:43:32.872916       1 pv_controller_base.go:505] deletion of claim "persistent-local-volumes-test-6102/pvc-bkh78" was already processed
I0902 13:43:33.344070       1 pv_controller.go:879] volume "pvc-2f180fd2-f137-423c-a333-4e38635d6b2b" entered phase "Bound"
I0902 13:43:33.344112       1 pv_controller.go:982] volume "pvc-2f180fd2-f137-423c-a333-4e38635d6b2b" bound to claim "volume-expand-2642/csi-hostpathjjr5w"
I0902 13:43:33.352292       1 pv_controller.go:823] claim "volume-expand-2642/csi-hostpathjjr5w" entered phase "Bound"
I0902 13:43:33.391253       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-8544-6297/csi-mockplugin-7b5f9dd45b" objectUID=7f2f614a-2220-4984-80bf-9da5b28dbcda kind="ControllerRevision" virtual=false
I0902 13:43:33.391425       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-8544-6297/csi-mockplugin-0" objectUID=ec83f55f-803c-4d1c-b36a-6d0a578da582 kind="Pod" virtual=false
I0902 13:43:33.391267       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-8544-6297/csi-mockplugin
I0902 13:43:33.398070       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-8544-6297/csi-mockplugin-7b5f9dd45b" objectUID=7f2f614a-2220-4984-80bf-9da5b28dbcda kind="ControllerRevision" propagationPolicy=Background
I0902 13:43:33.398217       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-8544-6297/csi-mockplugin-0" objectUID=ec83f55f-803c-4d1c-b36a-6d0a578da582 kind="Pod" propagationPolicy=Background
I0902 13:43:33.457370       1 namespace_controller.go:185] Namespace has been deleted apf-8482
E0902 13:43:33.547146       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-9323/default: secrets "default-token-9vf9g" is forbidden: unable to create new content in namespace kubectl-9323 because it is being terminated
I0902 13:43:33.610636       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-8544-6297/csi-mockplugin-attacher-8585ff4884" objectUID=eead2ed1-bddf-4ef3-87c5-16208275ad40 kind="ControllerRevision" virtual=false
I0902 13:43:33.610993       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-8544-6297/csi-mockplugin-attacher
I0902 13:43:33.611032       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-8544-6297/csi-mockplugin-attacher-0" objectUID=35d9c5be-22fd-4847-84fb-886ca66975ad kind="Pod" virtual=false
I0902 13:43:33.612952       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-8544-6297/csi-mockplugin-attacher-8585ff4884" objectUID=eead2ed1-bddf-4ef3-87c5-16208275ad40 kind="ControllerRevision" propagationPolicy=Background
I0902 13:43:33.613492       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-8544-6297/csi-mockplugin-attacher-0" objectUID=35d9c5be-22fd-4847-84fb-886ca66975ad kind="Pod" propagationPolicy=Background
E0902 13:43:33.766708       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:33.796385       1 pv_controller.go:930] claim "provisioning-8133/pvc-5m6x2" bound to volume "local-v84f2"
I0902 13:43:33.806942       1 pv_controller.go:879] volume "local-v84f2" entered phase "Bound"
I0902 13:43:33.806976       1 pv_controller.go:982] volume "local-v84f2" bound to claim "provisioning-8133/pvc-5m6x2"
I0902 13:43:33.813601       1 pv_controller.go:823] claim "provisioning-8133/pvc-5m6x2" entered phase "Bound"
I0902 13:43:34.221723       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8544
E0902 13:43:34.385605       1 tokens_controller.go:262] error synchronizing serviceaccount projected-29/default: secrets "default-token-cdfp8" is forbidden: unable to create new content in namespace projected-29 because it is being terminated
I0902 13:43:34.504829       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-599
I0902 13:43:34.847681       1 garbagecollector.go:471] "Processing object" object="mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558b" objectUID=f4f53793-8e34-428e-9a82-9b7d1ae162e8 kind="ReplicaSet" virtual=false
I0902 13:43:34.847911       1 deployment_controller.go:583] "Deployment has been deleted" deployment="mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a"
I0902 13:43:34.849862       1 garbagecollector.go:580] "Deleting object" object="mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558b" objectUID=f4f53793-8e34-428e-9a82-9b7d1ae162e8 kind="ReplicaSet" propagationPolicy=Background
I0902 13:43:34.852401       1 garbagecollector.go:471] "Processing object" object="mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bbs2hd" objectUID=f5e9f571-b3bd-463d-abf0-2573109677c4 kind="Pod" virtual=false
I0902 13:43:34.854314       1 garbagecollector.go:580] "Deleting object" object="mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bbs2hd" objectUID=f5e9f571-b3bd-463d-abf0-2573109677c4 kind="Pod" propagationPolicy=Background
I0902 13:43:34.863032       1 garbagecollector.go:471] "Processing object" object="mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bbs2hd" objectUID=368cf8ad-e42e-4acc-b3df-7fbb91d0fabd kind="CiliumEndpoint" virtual=false
I0902 13:43:34.865518       1 garbagecollector.go:580] "Deleting object" object="mounted-volume-expand-9244/deployment-608e86e2-8363-4935-8a08-43843864394a-6777d6558bbs2hd" objectUID=368cf8ad-e42e-4acc-b3df-7fbb91d0fabd kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:43:35.180893       1 pvc_protection_controller.go:291] "PVC is unused" PVC="mounted-volume-expand-9244/pvc-c8v6t"
I0902 13:43:35.190268       1 pv_controller.go:640] volume "pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638" is released and reclaim policy "Delete" will be executed
I0902 13:43:35.194051       1 pv_controller.go:879] volume "pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638" entered phase "Released"
I0902 13:43:35.198985       1 pv_controller.go:1340] isVolumeReleased[pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638]: volume is released
I0902 13:43:35.877912       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [mygroup.example.com/v1beta1, Resource=footpp8cas]
I0902 13:43:35.878000       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0902 13:43:35.878050       1 shared_informer.go:247] Caches are synced for garbage collector 
I0902 13:43:35.878056       1 garbagecollector.go:254] synced garbage collector
I0902 13:43:36.097703       1 event.go:294] "Event occurred" object="statefulset-7854/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0902 13:43:36.687704       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-7896-96
I0902 13:43:37.148593       1 event.go:294] "Event occurred" object="statefulset-7854/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
I0902 13:43:37.610822       1 garbagecollector.go:471] "Processing object" object="kubectl-4357/httpd" objectUID=c11391d6-8061-4f10-b5bf-4c539a0bc259 kind="CiliumEndpoint" virtual=false
I0902 13:43:37.650066       1 garbagecollector.go:580] "Deleting object" object="kubectl-4357/httpd" objectUID=c11391d6-8061-4f10-b5bf-4c539a0bc259 kind="CiliumEndpoint" propagationPolicy=Background
E0902 13:43:37.866843       1 tokens_controller.go:262] error synchronizing serviceaccount watch-8419/default: secrets "default-token-bzjm7" is forbidden: unable to create new content in namespace watch-8419 because it is being terminated
E0902 13:43:38.790776       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-8544-6297/default: secrets "default-token-92tzq" is forbidden: unable to create new content in namespace csi-mock-volumes-8544-6297 because it is being terminated
I0902 13:43:38.972544       1 event.go:294] "Event occurred" object="statefulset-7854/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
I0902 13:43:39.411431       1 namespace_controller.go:185] Namespace has been deleted projected-29
E0902 13:43:39.486538       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:39.912353       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cab5c787f9ac3eb5") on node "ip-172-20-61-191.eu-central-1.compute.internal"
I0902 13:43:39.915085       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cab5c787f9ac3eb5") on node "ip-172-20-61-191.eu-central-1.compute.internal"
E0902 13:43:40.221306       1 tokens_controller.go:262] error synchronizing serviceaccount mounted-volume-expand-9244/default: secrets "default-token-9j5gd" is forbidden: unable to create new content in namespace mounted-volume-expand-9244 because it is being terminated
I0902 13:43:40.289540       1 namespace_controller.go:185] Namespace has been deleted projected-7224
E0902 13:43:40.301894       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:40.557756       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-6102
E0902 13:43:42.341202       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-4456/pvc-wb926: storageclass.storage.k8s.io "provisioning-4456" not found
I0902 13:43:42.341429       1 event.go:294] "Event occurred" object="provisioning-4456/pvc-wb926" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-4456\" not found"
I0902 13:43:42.458611       1 pv_controller.go:879] volume "local-knjnd" entered phase "Available"
I0902 13:43:43.028507       1 namespace_controller.go:185] Namespace has been deleted watch-8419
E0902 13:43:43.175334       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0902 13:43:43.423672       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:43.721032       1 namespace_controller.go:185] Namespace has been deleted kubectl-9323
E0902 13:43:43.910935       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-4357/default: secrets "default-token-pnsqx" is forbidden: unable to create new content in namespace kubectl-4357 because it is being terminated
I0902 13:43:45.276516       1 namespace_controller.go:185] Namespace has been deleted mounted-volume-expand-9244
E0902 13:43:45.630156       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:45.651354       1 replica_set.go:563] "Too few replicas" replicaSet="disruption-6190/rs" need=10 creating=1
I0902 13:43:45.659929       1 event.go:294] "Event occurred" object="disruption-6190/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-w5f8t"
I0902 13:43:46.359783       1 event.go:294] "Event occurred" object="statefulset-6091/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success"
I0902 13:43:46.360310       1 event.go:294] "Event occurred" object="statefulset-6091/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0902 13:43:46.367222       1 event.go:294] "Event occurred" object="statefulset-6091/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0902 13:43:46.379382       1 event.go:294] "Event occurred" object="statefulset-6091/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0902 13:43:46.381294       1 event.go:294] "Event occurred" object="statefulset-6091/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0902 13:43:46.644522       1 pv_controller.go:1340] isVolumeReleased[pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638]: volume is released
I0902 13:43:46.759294       1 garbagecollector.go:471] "Processing object" object="services-9188/verify-service-up-exec-pod-mvvgr" objectUID=8f55af44-4312-4d4f-8e1a-c1adb508af60 kind="CiliumEndpoint" virtual=false
I0902 13:43:46.766417       1 garbagecollector.go:580] "Deleting object" object="services-9188/verify-service-up-exec-pod-mvvgr" objectUID=8f55af44-4312-4d4f-8e1a-c1adb508af60 kind="CiliumEndpoint" propagationPolicy=Background
E0902 13:43:46.779477       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-6958/pvc-7cxs5: storageclass.storage.k8s.io "provisioning-6958" not found
I0902 13:43:46.779732       1 event.go:294] "Event occurred" object="provisioning-6958/pvc-7cxs5" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-6958\" not found"
I0902 13:43:46.782346       1 pv_controller_base.go:505] deletion of claim "mounted-volume-expand-9244/pvc-c8v6t" was already processed
I0902 13:43:46.895641       1 pv_controller.go:879] volume "local-hkwp6" entered phase "Available"
E0902 13:43:46.998719       1 tokens_controller.go:262] error synchronizing serviceaccount container-lifecycle-hook-1710/default: secrets "default-token-9fctp" is forbidden: unable to create new content in namespace container-lifecycle-hook-1710 because it is being terminated
I0902 13:43:47.127835       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-8133/pvc-5m6x2"
I0902 13:43:47.133097       1 pv_controller.go:640] volume "local-v84f2" is released and reclaim policy "Retain" will be executed
I0902 13:43:47.142317       1 pv_controller.go:879] volume "local-v84f2" entered phase "Released"
I0902 13:43:47.173058       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-5806/inline-volume-tester-d6lct" PVC="ephemeral-5806/inline-volume-tester-d6lct-my-volume-0"
I0902 13:43:47.173267       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-5806/inline-volume-tester-d6lct-my-volume-0"
I0902 13:43:47.184511       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-5806/inline-volume-tester-d6lct-my-volume-0"
I0902 13:43:47.190501       1 garbagecollector.go:471] "Processing object" object="ephemeral-5806/inline-volume-tester-d6lct" objectUID=5a854bbc-fece-45e7-a62a-fb0a8c0c3daa kind="Pod" virtual=false
I0902 13:43:47.193101       1 pv_controller.go:640] volume "pvc-e34d427f-5384-43e9-aa09-c9d6070d5536" is released and reclaim policy "Delete" will be executed
I0902 13:43:47.193667       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-5806, name: inline-volume-tester-d6lct, uid: 5a854bbc-fece-45e7-a62a-fb0a8c0c3daa]
I0902 13:43:47.196914       1 pv_controller.go:879] volume "pvc-e34d427f-5384-43e9-aa09-c9d6070d5536" entered phase "Released"
I0902 13:43:47.199605       1 pv_controller.go:1340] isVolumeReleased[pvc-e34d427f-5384-43e9-aa09-c9d6070d5536]: volume is released
I0902 13:43:47.210622       1 pv_controller_base.go:505] deletion of claim "ephemeral-5806/inline-volume-tester-d6lct-my-volume-0" was already processed
I0902 13:43:47.241142       1 pv_controller_base.go:505] deletion of claim "provisioning-8133/pvc-5m6x2" was already processed
I0902 13:43:47.336593       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2c07d0ca-dd3b-4b97-a9bf-308ba89d1638" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0cab5c787f9ac3eb5") on node "ip-172-20-61-191.eu-central-1.compute.internal"
E0902 13:43:47.344909       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-8120/pvc-5gz4k: storageclass.storage.k8s.io "volume-8120" not found
I0902 13:43:47.345041       1 event.go:294] "Event occurred" object="volume-8120/pvc-5gz4k" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-8120\" not found"
I0902 13:43:47.459997       1 pv_controller.go:879] volume "local-k5pdh" entered phase "Available"
E0902 13:43:47.580443       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:47.738306       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-3640/pvc-6t5sw"
I0902 13:43:47.746086       1 pv_controller.go:640] volume "aws-whgq2" is released and reclaim policy "Retain" will be executed
I0902 13:43:47.754883       1 pv_controller.go:879] volume "aws-whgq2" entered phase "Released"
I0902 13:43:47.807112       1 replica_set.go:563] "Too few replicas" replicaSet="gc-9324/simpletest.rc" need=10 creating=10
I0902 13:43:47.811863       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-bgzkm"
I0902 13:43:47.821271       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-qj4rs"
I0902 13:43:47.824457       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-fcw6j"
I0902 13:43:47.838488       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-4x7pl"
I0902 13:43:47.839140       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-snmvh"
I0902 13:43:47.839296       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-6rf4w"
I0902 13:43:47.839373       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-77ktj"
I0902 13:43:47.852716       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-bvwmw"
I0902 13:43:47.853921       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-xxcvb"
I0902 13:43:47.854025       1 event.go:294] "Event occurred" object="gc-9324/simpletest.rc" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: simpletest.rc-42hsl"
I0902 13:43:47.989855       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-4014/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0902 13:43:47.990873       1 event.go:294] "Event occurred" object="webhook-4014/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0902 13:43:48.000145       1 event.go:294] "Event occurred" object="webhook-4014/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-d8qbn"
I0902 13:43:48.009320       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-4014/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:43:48.025217       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-4014/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
E0902 13:43:48.505850       1 tokens_controller.go:262] error synchronizing serviceaccount sctp-6835/default: secrets "default-token-w8996" is forbidden: unable to create new content in namespace sctp-6835 because it is being terminated
E0902 13:43:48.599923       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:48.796993       1 pv_controller.go:930] claim "provisioning-6958/pvc-7cxs5" bound to volume "local-hkwp6"
I0902 13:43:48.797149       1 event.go:294] "Event occurred" object="statefulset-6091/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0902 13:43:48.807138       1 pv_controller.go:879] volume "local-hkwp6" entered phase "Bound"
I0902 13:43:48.807172       1 pv_controller.go:982] volume "local-hkwp6" bound to claim "provisioning-6958/pvc-7cxs5"
I0902 13:43:48.814358       1 pv_controller.go:823] claim "provisioning-6958/pvc-7cxs5" entered phase "Bound"
I0902 13:43:48.814658       1 pv_controller.go:930] claim "volume-8120/pvc-5gz4k" bound to volume "local-k5pdh"
I0902 13:43:48.829551       1 pv_controller.go:879] volume "local-k5pdh" entered phase "Bound"
I0902 13:43:48.829587       1 pv_controller.go:982] volume "local-k5pdh" bound to claim "volume-8120/pvc-5gz4k"
I0902 13:43:48.837137       1 pv_controller.go:823] claim "volume-8120/pvc-5gz4k" entered phase "Bound"
I0902 13:43:48.837569       1 pv_controller.go:930] claim "provisioning-4456/pvc-wb926" bound to volume "local-knjnd"
I0902 13:43:48.845663       1 pv_controller.go:879] volume "local-knjnd" entered phase "Bound"
I0902 13:43:48.845693       1 pv_controller.go:982] volume "local-knjnd" bound to claim "provisioning-4456/pvc-wb926"
I0902 13:43:48.853486       1 pv_controller.go:823] claim "provisioning-4456/pvc-wb926" entered phase "Bound"
I0902 13:43:48.953325       1 stateful_set_control.go:555] StatefulSet statefulset-9084/ss2 terminating Pod ss2-1 for update
I0902 13:43:48.961491       1 event.go:294] "Event occurred" object="statefulset-9084/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
I0902 13:43:49.102672       1 namespace_controller.go:185] Namespace has been deleted kubectl-4357
I0902 13:43:49.113440       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-8544-6297
I0902 13:43:49.750526       1 pv_controller.go:879] volume "pvc-e576dc71-7488-46c5-a1f8-9b9f98ecbf3a" entered phase "Bound"
I0902 13:43:49.750687       1 pv_controller.go:982] volume "pvc-e576dc71-7488-46c5-a1f8-9b9f98ecbf3a" bound to claim "statefulset-6091/datadir-ss-1"
I0902 13:43:49.758055       1 pv_controller.go:823] claim "statefulset-6091/datadir-ss-1" entered phase "Bound"
I0902 13:43:50.018442       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "aws-whgq2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447") on node "ip-172-20-49-181.eu-central-1.compute.internal"
I0902 13:43:50.020875       1 operation_generator.go:1577] Verified volume is safe to detach for volume "aws-whgq2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447") on node "ip-172-20-49-181.eu-central-1.compute.internal"
E0902 13:43:50.115901       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:43:50.420989       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-e576dc71-7488-46c5-a1f8-9b9f98ecbf3a" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d018c4f19a1d9712") from node "ip-172-20-49-181.eu-central-1.compute.internal"
E0902 13:43:50.898199       1 namespace_controller.go:162] deletion of namespace kubectl-1430 failed: unexpected items still remain in namespace: kubectl-1430 for gvr: /v1, Resource=pods
E0902 13:43:51.166328       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-4062/default: secrets "default-token-bkg5d" is forbidden: unable to create new content in namespace secrets-4062 because it is being terminated
I0902 13:43:51.378044       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-bn2js" objectUID=0bb9bcff-f6be-48d5-a810-6d78b0cc8c6d kind="Pod" virtual=false
I0902 13:43:51.378401       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-v6fzl" objectUID=63623e95-5fb5-439e-9625-67bbb0714498 kind="Pod" virtual=false
I0902 13:43:51.378427       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-c8dt7" objectUID=37f5d867-6433-450f-a50b-bed6d9f093c2 kind="Pod" virtual=false
I0902 13:43:51.378565       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-x4hkz" objectUID=69b155f6-4920-4ca4-bbeb-47cefeaddc48 kind="Pod" virtual=false
I0902 13:43:51.378586       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-w5f8t" objectUID=079d4888-43b4-4c4e-af3f-fa053e7eb6da kind="Pod" virtual=false
I0902 13:43:51.378673       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-fdrh4" objectUID=25e25f14-f889-446c-849d-d8997c769e53 kind="Pod" virtual=false
I0902 13:43:51.378810       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-wt2r8" objectUID=361b8894-48c0-41fd-91d0-509487f76810 kind="Pod" virtual=false
I0902 13:43:51.378833       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-9hb4b" objectUID=f7f929b4-e241-4802-9d06-2a8f05609b47 kind="Pod" virtual=false
I0902 13:43:51.378906       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-hf6x5" objectUID=5e23ee86-e7da-4c30-b9a3-1b84334fdc2f kind="Pod" virtual=false
I0902 13:43:51.379039       1 garbagecollector.go:471] "Processing object" object="disruption-6190/rs-wcm9w" objectUID=511799c5-e515-425f-8b3e-28b6e921f971 kind="Pod" virtual=false
I0902 13:43:51.404196       1 garbagecollector.go:580] "Deleting object" object="disruption-6190/rs-x4hkz" 
objectUID=69b155f6-4920-4ca4-bbeb-47cefeaddc48 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.404655       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-bn2js\" objectUID=0bb9bcff-f6be-48d5-a810-6d78b0cc8c6d kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.404731       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-wcm9w\" objectUID=511799c5-e515-425f-8b3e-28b6e921f971 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.404796       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-c8dt7\" objectUID=37f5d867-6433-450f-a50b-bed6d9f093c2 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.404937       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-hf6x5\" objectUID=5e23ee86-e7da-4c30-b9a3-1b84334fdc2f kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.404984       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-fdrh4\" objectUID=25e25f14-f889-446c-849d-d8997c769e53 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.405030       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-9hb4b\" objectUID=f7f929b4-e241-4802-9d06-2a8f05609b47 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.405075       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-wt2r8\" objectUID=361b8894-48c0-41fd-91d0-509487f76810 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.405120       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-w5f8t\" objectUID=079d4888-43b4-4c4e-af3f-fa053e7eb6da kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:51.405149       1 garbagecollector.go:580] \"Deleting object\" object=\"disruption-6190/rs-v6fzl\" objectUID=63623e95-5fb5-439e-9625-67bbb0714498 kind=\"Pod\" propagationPolicy=Background\nI0902 13:43:52.062983       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume 
\"pvc-e34d427f-5384-43e9-aa09-c9d6070d5536\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-5806^b5ca650f-0bf3-11ec-955d-9a42dd389afb\") on node \"ip-172-20-45-138.eu-central-1.compute.internal\" \nI0902 13:43:52.068098       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-e34d427f-5384-43e9-aa09-c9d6070d5536\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-5806^b5ca650f-0bf3-11ec-955d-9a42dd389afb\") on node \"ip-172-20-45-138.eu-central-1.compute.internal\" \nI0902 13:43:52.078450       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-1710\nI0902 13:43:52.571173       1 event.go:294] \"Event occurred\" object=\"statefulset-9084/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nI0902 13:43:52.609541       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-e34d427f-5384-43e9-aa09-c9d6070d5536\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-5806^b5ca650f-0bf3-11ec-955d-9a42dd389afb\") on node \"ip-172-20-45-138.eu-central-1.compute.internal\" \nI0902 13:43:53.026194       1 graph_builder.go:587] add [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.026396       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-qj4rs\" objectUID=99722b4d-2e6a-4116-b1d7-6f1d4ba79735 kind=\"Pod\" virtual=false\nI0902 13:43:53.026631       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-4x7pl\" objectUID=1b352339-361e-4609-adc5-dc346fc2446c kind=\"Pod\" virtual=false\nI0902 13:43:53.026809       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-xxcvb\" objectUID=1e12aeba-93d4-4590-bc8b-b4b4aab45479 kind=\"Pod\" 
virtual=false\nI0902 13:43:53.026894       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-42hsl\" objectUID=9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a kind=\"Pod\" virtual=false\nI0902 13:43:53.027031       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-bvwmw\" objectUID=8d432735-9849-4bb6-a8bf-72673ca2c466 kind=\"Pod\" virtual=false\nI0902 13:43:53.027103       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-bgzkm\" objectUID=d48f31fa-9e83-4e7a-a0c0-95aa23f815ba kind=\"Pod\" virtual=false\nI0902 13:43:53.027326       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-fcw6j\" objectUID=0a0a91aa-b75f-4c2e-ad39-8e29f57c5c27 kind=\"Pod\" virtual=false\nI0902 13:43:53.027401       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-77ktj\" objectUID=33f9b771-973c-4604-99fb-8755b3a4edb8 kind=\"Pod\" virtual=false\nI0902 13:43:53.027588       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-snmvh\" objectUID=3967e62a-f2d1-4d1a-8089-8550c7756c5a kind=\"Pod\" virtual=false\nI0902 13:43:53.027733       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc\" objectUID=424abc8a-19ad-420a-9828-08e4c96a69dc kind=\"ReplicationController\" virtual=false\nI0902 13:43:53.027719       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-6rf4w\" objectUID=cb658bb4-03e1-4c45-9108-75f7a225b66f kind=\"Pod\" virtual=false\nI0902 13:43:53.037046       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: simpletest.rc-snmvh, uid: 3967e62a-f2d1-4d1a-8089-8550c7756c5a] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037072       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: 
simpletest.rc-6rf4w, uid: cb658bb4-03e1-4c45-9108-75f7a225b66f] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037291       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: simpletest.rc-77ktj, uid: 33f9b771-973c-4604-99fb-8755b3a4edb8] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037302       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: simpletest.rc-42hsl, uid: 9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037396       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: simpletest.rc-bvwmw, uid: 8d432735-9849-4bb6-a8bf-72673ca2c466] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037409       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: simpletest.rc-bgzkm, uid: d48f31fa-9e83-4e7a-a0c0-95aa23f815ba] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037418       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: simpletest.rc-fcw6j, uid: 0a0a91aa-b75f-4c2e-ad39-8e29f57c5c27] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037572       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: 
simpletest.rc-qj4rs, uid: 99722b4d-2e6a-4116-b1d7-6f1d4ba79735] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037581       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: simpletest.rc-4x7pl, uid: 1b352339-361e-4609-adc5-dc346fc2446c] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.037682       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-9324, name: simpletest.rc-xxcvb, uid: 1e12aeba-93d4-4590-bc8b-b4b4aab45479] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc] is deletingDependents\nI0902 13:43:53.041860       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-42hsl, uid: 9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.043994       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-snmvh, uid: 3967e62a-f2d1-4d1a-8089-8550c7756c5a] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.044197       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-qj4rs, uid: 99722b4d-2e6a-4116-b1d7-6f1d4ba79735] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.044384       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-77ktj, uid: 33f9b771-973c-4604-99fb-8755b3a4edb8] has 
FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.044517       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-6rf4w, uid: cb658bb4-03e1-4c45-9108-75f7a225b66f] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.044661       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-xxcvb, uid: 1e12aeba-93d4-4590-bc8b-b4b4aab45479] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.044773       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-fcw6j, uid: 0a0a91aa-b75f-4c2e-ad39-8e29f57c5c27] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.044905       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-4x7pl, uid: 1b352339-361e-4609-adc5-dc346fc2446c] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.045060       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-bvwmw, uid: 8d432735-9849-4bb6-a8bf-72673ca2c466] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.045246       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-9324, name: simpletest.rc-bgzkm, uid: d48f31fa-9e83-4e7a-a0c0-95aa23f815ba] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground\nI0902 13:43:53.051724       1 garbagecollector.go:471] \"Processing object\" 
object=\"gc-9324/simpletest.rc-42hsl\" objectUID=9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a kind=\"Pod\" virtual=false\nI0902 13:43:53.054259       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-42hsl, uid: 9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.054302       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-42hsl\" objectUID=9dfdafff-db7a-45e2-bcf5-382dae3cfd38 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.057710       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-qj4rs\" objectUID=99722b4d-2e6a-4116-b1d7-6f1d4ba79735 kind=\"Pod\" virtual=false\nI0902 13:43:53.057937       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-qj4rs, uid: 99722b4d-2e6a-4116-b1d7-6f1d4ba79735] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.057972       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-qj4rs\" objectUID=5b8a9fe6-cd37-4a3d-a8df-2ae4e2bdd03d kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.061202       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-bgzkm, uid: d48f31fa-9e83-4e7a-a0c0-95aa23f815ba] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.061239       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-bgzkm\" objectUID=e6d657c9-39ad-4030-8822-ce15dd6321a5 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.062552       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-6rf4w\" objectUID=cb658bb4-03e1-4c45-9108-75f7a225b66f kind=\"Pod\" virtual=false\nI0902 13:43:53.062649       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-77ktj\" objectUID=33f9b771-973c-4604-99fb-8755b3a4edb8 kind=\"Pod\" 
virtual=false\nI0902 13:43:53.062707       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-bgzkm\" objectUID=d48f31fa-9e83-4e7a-a0c0-95aa23f815ba kind=\"Pod\" virtual=false\nI0902 13:43:53.062763       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-bvwmw\" objectUID=8d432735-9849-4bb6-a8bf-72673ca2c466 kind=\"Pod\" virtual=false\nI0902 13:43:53.064803       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-4x7pl\" objectUID=1b352339-361e-4609-adc5-dc346fc2446c kind=\"Pod\" virtual=false\nI0902 13:43:53.064879       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-xxcvb\" objectUID=1e12aeba-93d4-4590-bc8b-b4b4aab45479 kind=\"Pod\" virtual=false\nI0902 13:43:53.070152       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-6rf4w, uid: cb658bb4-03e1-4c45-9108-75f7a225b66f] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.070195       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-6rf4w\" objectUID=1a7b036b-3f70-44eb-9925-3592be2129a2 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.070454       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-bvwmw, uid: 8d432735-9849-4bb6-a8bf-72673ca2c466] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.070488       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-bvwmw\" objectUID=8c9a37af-f5ae-49b4-b49c-78aa96cccba7 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.070533       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-77ktj, uid: 33f9b771-973c-4604-99fb-8755b3a4edb8] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.070559       1 garbagecollector.go:471] \"Processing object\" 
object=\"gc-9324/simpletest.rc-77ktj\" objectUID=b293f71a-bbce-45bb-b9ae-0778e9b91a7a kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.072210       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-xxcvb, uid: 1e12aeba-93d4-4590-bc8b-b4b4aab45479] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.072258       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-xxcvb\" objectUID=74093f3f-4e29-4028-8eae-57c95538194d kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.072426       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-4x7pl, uid: 1b352339-361e-4609-adc5-dc346fc2446c] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.072548       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-4x7pl\" objectUID=8f45f60f-a2f1-435f-a317-495c0af2ed21 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.073682       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-fcw6j, uid: 0a0a91aa-b75f-4c2e-ad39-8e29f57c5c27] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.073716       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-fcw6j\" objectUID=6d8f7d80-47f1-4837-8ee8-f9ffbb47f632 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.085724       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-fcw6j\" objectUID=0a0a91aa-b75f-4c2e-ad39-8e29f57c5c27 kind=\"Pod\" virtual=false\nI0902 13:43:53.101332       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-snmvh\" objectUID=3967e62a-f2d1-4d1a-8089-8550c7756c5a kind=\"Pod\" virtual=false\nI0902 13:43:53.101700       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\" (UniqueName: 
\"kubernetes.io/csi/ebs.csi.aws.com^vol-02fdcca57b1c17075\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:53.102474       1 graph_builder.go:587] add [v1/Pod, namespace: gc-9324, name: simpletest.rc-snmvh, uid: 3967e62a-f2d1-4d1a-8089-8550c7756c5a] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0902 13:43:53.102515       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-snmvh\" objectUID=b05370ef-0b73-422f-a2a3-53c1819ae012 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:43:53.110010       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02fdcca57b1c17075\") on node \"ip-172-20-49-181.eu-central-1.compute.internal\" \nI0902 13:43:53.129413       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: gc-9324, name: simpletest.rc-42hsl, uid: 9dfdafff-db7a-45e2-bcf5-382dae3cfd38] to attemptToDelete, because its owner [v1/Pod, namespace: gc-9324, name: simpletest.rc-42hsl, uid: 9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a] is deletingDependents\nI0902 13:43:53.129466       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-42hsl\" objectUID=9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a kind=\"Pod\" virtual=false\nI0902 13:43:53.228741       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: gc-9324, name: simpletest.rc-qj4rs, uid: 5b8a9fe6-cd37-4a3d-a8df-2ae4e2bdd03d] to attemptToDelete, because its owner [v1/Pod, namespace: gc-9324, name: simpletest.rc-qj4rs, uid: 99722b4d-2e6a-4116-b1d7-6f1d4ba79735] is deletingDependents\nI0902 13:43:53.228787       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-qj4rs\" objectUID=99722b4d-2e6a-4116-b1d7-6f1d4ba79735 kind=\"Pod\" virtual=false\nI0902 13:43:53.379914       1 garbagecollector.go:590] remove DeleteDependents finalizer 
for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-6rf4w, uid: cb658bb4-03e1-4c45-9108-75f7a225b66f]\nI0902 13:43:53.431605       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: gc-9324, name: simpletest.rc-77ktj, uid: b293f71a-bbce-45bb-b9ae-0778e9b91a7a] to attemptToDelete, because its owner [v1/Pod, namespace: gc-9324, name: simpletest.rc-77ktj, uid: 33f9b771-973c-4604-99fb-8755b3a4edb8] is deletingDependents\nI0902 13:43:53.431654       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-77ktj\" objectUID=33f9b771-973c-4604-99fb-8755b3a4edb8 kind=\"Pod\" virtual=false\nI0902 13:43:53.487428       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-bgzkm, uid: d48f31fa-9e83-4e7a-a0c0-95aa23f815ba]\nI0902 13:43:53.528407       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: gc-9324, name: simpletest.rc-bvwmw, uid: 8c9a37af-f5ae-49b4-b49c-78aa96cccba7] to attemptToDelete, because its owner [v1/Pod, namespace: gc-9324, name: simpletest.rc-bvwmw, uid: 8d432735-9849-4bb6-a8bf-72673ca2c466] is deletingDependents\nI0902 13:43:53.528456       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-bvwmw\" objectUID=8d432735-9849-4bb6-a8bf-72673ca2c466 kind=\"Pod\" virtual=false\nI0902 13:43:53.578268       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-4x7pl, uid: 1b352339-361e-4609-adc5-dc346fc2446c]\nI0902 13:43:53.628336       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-xxcvb, uid: 1e12aeba-93d4-4590-bc8b-b4b4aab45479]\nI0902 13:43:53.669355       1 namespace_controller.go:185] Namespace has been deleted sctp-6835\nI0902 13:43:53.987963       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: 
simpletest.rc-fcw6j, uid: 0a0a91aa-b75f-4c2e-ad39-8e29f57c5c27]\nI0902 13:43:54.031005       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-snmvh, uid: 3967e62a-f2d1-4d1a-8089-8550c7756c5a]\nI0902 13:43:54.130316       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-42hsl, uid: 9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a]\nI0902 13:43:54.179064       1 garbagecollector.go:580] \"Deleting object\" object=\"gc-9324/simpletest.rc-42hsl\" objectUID=9dfdafff-db7a-45e2-bcf5-382dae3cfd38 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:43:54.228443       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-qj4rs, uid: 99722b4d-2e6a-4116-b1d7-6f1d4ba79735]\nI0902 13:43:54.251705       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-4489-6513/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0902 13:43:54.279080       1 garbagecollector.go:580] \"Deleting object\" object=\"gc-9324/simpletest.rc-qj4rs\" objectUID=5b8a9fe6-cd37-4a3d-a8df-2ae4e2bdd03d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:43:54.333770       1 garbagecollector.go:580] \"Deleting object\" object=\"gc-9324/simpletest.rc-bgzkm\" objectUID=e6d657c9-39ad-4030-8822-ce15dd6321a5 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:43:54.431150       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"fsgroupchangepolicy-5411/awskp5f4\"\nI0902 13:43:54.434866       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: gc-9324, name: simpletest.rc-77ktj, uid: b293f71a-bbce-45bb-b9ae-0778e9b91a7a] to attemptToDelete, because its owner [v1/Pod, namespace: gc-9324, name: simpletest.rc-77ktj, uid: 
33f9b771-973c-4604-99fb-8755b3a4edb8] is deletingDependents\nI0902 13:43:54.441600       1 pv_controller.go:640] volume \"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\" is released and reclaim policy \"Delete\" will be executed\nI0902 13:43:54.447012       1 pv_controller.go:879] volume \"pvc-15ba64ec-92b9-4719-98c9-859e56310ebd\" entered phase \"Released\"\nI0902 13:43:54.453291       1 pv_controller.go:1340] isVolumeReleased[pvc-15ba64ec-92b9-4719-98c9-859e56310ebd]: volume is released\nI0902 13:43:54.457856       1 garbagecollector.go:471] \"Processing object\" object=\"gc-9324/simpletest.rc-77ktj\" objectUID=33f9b771-973c-4604-99fb-8755b3a4edb8 kind=\"Pod\" virtual=false\nI0902 13:43:54.530336       1 request.go:665] Waited for 1.001646046s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1/api/v1/namespaces/gc-9324/pods/simpletest.rc-bvwmw\nI0902 13:43:54.533216       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-bvwmw, uid: 8d432735-9849-4bb6-a8bf-72673ca2c466]\nI0902 13:43:54.564825       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-4014/e2e-test-webhook-9bc26\" objectUID=8c0f3b66-8047-4ace-878e-3e37e2d30652 kind=\"EndpointSlice\" virtual=false\nI0902 13:43:54.678974       1 garbagecollector.go:580] \"Deleting object\" object=\"gc-9324/simpletest.rc-bvwmw\" objectUID=8c9a37af-f5ae-49b4-b49c-78aa96cccba7 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:43:54.683040       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-4014/sample-webhook-deployment-78988fc6cd\" objectUID=6ba83754-272b-46b2-895f-7e031b1817f6 kind=\"ReplicaSet\" virtual=false\nI0902 13:43:54.683291       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-4014/sample-webhook-deployment\"\nI0902 13:43:54.728714       1 garbagecollector.go:580] \"Deleting object\" object=\"gc-9324/simpletest.rc-77ktj\" 
objectUID=b293f71a-bbce-45bb-b9ae-0778e9b91a7a kind="CiliumEndpoint" propagationPolicy=Background
E0902 13:43:54.929351       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"cilium.io/v2", Kind:"CiliumEndpoint", Name:"simpletest.rc-42hsl", UID:"9dfdafff-db7a-45e2-bcf5-382dae3cfd38", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-9324"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest.rc-42hsl", UID:"9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0x4000dfa7da)}}}: ciliumendpoints.cilium.io "simpletest.rc-42hsl" not found
I0902 13:43:54.929395       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-42hsl" objectUID=9dfdafff-db7a-45e2-bcf5-382dae3cfd38 kind="CiliumEndpoint" virtual=false
E0902 13:43:55.029488       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"cilium.io/v2", Kind:"CiliumEndpoint", Name:"simpletest.rc-qj4rs", UID:"5b8a9fe6-cd37-4a3d-a8df-2ae4e2bdd03d", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-9324"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest.rc-qj4rs", UID:"99722b4d-2e6a-4116-b1d7-6f1d4ba79735", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0x4002a52a22)}}}: ciliumendpoints.cilium.io "simpletest.rc-qj4rs" not found
I0902 13:43:55.029671       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-qj4rs" objectUID=5b8a9fe6-cd37-4a3d-a8df-2ae4e2bdd03d kind="CiliumEndpoint" virtual=false
E0902 13:43:55.079239       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"cilium.io/v2", Kind:"CiliumEndpoint", Name:"simpletest.rc-bgzkm", UID:"e6d657c9-39ad-4030-8822-ce15dd6321a5", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-9324"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest.rc-bgzkm", UID:"d48f31fa-9e83-4e7a-a0c0-95aa23f815ba", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0x4000dfa4da)}}}: ciliumendpoints.cilium.io "simpletest.rc-bgzkm" not found
I0902 13:43:55.084378       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-bgzkm" objectUID=e6d657c9-39ad-4030-8822-ce15dd6321a5 kind="CiliumEndpoint" virtual=false
I0902 13:43:55.132428       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc" objectUID=424abc8a-19ad-420a-9828-08e4c96a69dc kind="ReplicationController" virtual=false
I0902 13:43:55.134071       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-6rf4w" objectUID=cb658bb4-03e1-4c45-9108-75f7a225b66f kind="Pod" virtual=false
I0902 13:43:55.179391       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: gc-9324, name: simpletest.rc-77ktj, uid: 33f9b771-973c-4604-99fb-8755b3a4edb8]
I0902 13:43:55.235896       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-bgzkm" objectUID=d48f31fa-9e83-4e7a-a0c0-95aa23f815ba kind="Pod" virtual=false
I0902 13:43:55.328927       1 garbagecollector.go:580] "Deleting object" object="webhook-4014/e2e-test-webhook-9bc26" objectUID=8c0f3b66-8047-4ace-878e-3e37e2d30652 kind="EndpointSlice" propagationPolicy=Background
I0902 13:43:55.385081       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-4x7pl" objectUID=1b352339-361e-4609-adc5-dc346fc2446c kind="Pod" virtual=false
I0902 13:43:55.396798       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" need=1 creating=1
I0902 13:43:55.397644       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 1"
I0902 13:43:55.404973       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:43:55.407887       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-7qjlr"
I0902 13:43:55.450267       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-xxcvb" objectUID=1e12aeba-93d4-4590-bc8b-b4b4aab45479 kind="Pod" virtual=false
E0902 13:43:55.478419       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"cilium.io/v2", Kind:"CiliumEndpoint", Name:"simpletest.rc-bvwmw", UID:"8c9a37af-f5ae-49b4-b49c-78aa96cccba7", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-9324"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest.rc-bvwmw", UID:"8d432735-9849-4bb6-a8bf-72673ca2c466", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0x400329706a)}}}: ciliumendpoints.cilium.io "simpletest.rc-bvwmw" not found
I0902 13:43:55.478586       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-bvwmw" objectUID=8c9a37af-f5ae-49b4-b49c-78aa96cccba7 kind="CiliumEndpoint" virtual=false
I0902 13:43:55.529092       1 garbagecollector.go:580] "Deleting object" object="webhook-4014/sample-webhook-deployment-78988fc6cd" objectUID=6ba83754-272b-46b2-895f-7e031b1817f6 kind="ReplicaSet" propagationPolicy=Background
E0902 13:43:55.578520       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"cilium.io/v2", Kind:"CiliumEndpoint", Name:"simpletest.rc-77ktj", UID:"b293f71a-bbce-45bb-b9ae-0778e9b91a7a", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-9324"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest.rc-77ktj", UID:"33f9b771-973c-4604-99fb-8755b3a4edb8", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0x40037eae8a)}}}: ciliumendpoints.cilium.io "simpletest.rc-77ktj" not found
I0902 13:43:55.578565       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-77ktj" objectUID=b293f71a-bbce-45bb-b9ae-0778e9b91a7a kind="CiliumEndpoint" virtual=false
I0902 13:43:55.633029       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-fcw6j" objectUID=0a0a91aa-b75f-4c2e-ad39-8e29f57c5c27 kind="Pod" virtual=false
I0902 13:43:55.682885       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-snmvh" objectUID=3967e62a-f2d1-4d1a-8089-8550c7756c5a kind="Pod" virtual=false
I0902 13:43:55.687688       1 event.go:294] "Event occurred" object="volume-4814/aws22w5c" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0902 13:43:55.779004       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-42hsl" objectUID=9dfdafff-db7a-45e2-bcf5-382dae3cfd38 kind="CiliumEndpoint" virtual=false
I0902 13:43:55.831995       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-qj4rs" objectUID=99722b4d-2e6a-4116-b1d7-6f1d4ba79735 kind="Pod" virtual=false
I0902 13:43:55.879199       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-qj4rs" objectUID=5b8a9fe6-cd37-4a3d-a8df-2ae4e2bdd03d kind="CiliumEndpoint" virtual=false
I0902 13:43:55.916912       1 event.go:294] "Event occurred" object="volume-4814/aws22w5c" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0902 13:43:55.985282       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc" objectUID=424abc8a-19ad-420a-9828-08e4c96a69dc kind="ReplicationController" virtual=false
I0902 13:43:56.184835       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-bvwmw" objectUID=8d432735-9849-4bb6-a8bf-72673ca2c466 kind="Pod" virtual=false
I0902 13:43:56.303966       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-6468
I0902 13:43:56.379548       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-bvwmw" objectUID=8c9a37af-f5ae-49b4-b49c-78aa96cccba7 kind="CiliumEndpoint" virtual=false
I0902 13:43:56.382793       1 namespace_controller.go:185] Namespace has been deleted secrets-4062
I0902 13:43:56.437088       1 garbagecollector.go:471] "Processing object" object="webhook-4014/sample-webhook-deployment-78988fc6cd-d8qbn" objectUID=5a43b553-5722-492d-98da-be144775fdab kind="Pod" virtual=false
I0902 13:43:56.440229       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-9f0085ba-0355-4b70-8dca-1dba4a3a823d" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-1618^4") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:43:56.442022       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-9f0085ba-0355-4b70-8dca-1dba4a3a823d" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-1618^4") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:43:56.463131       1 namespace_controller.go:185] Namespace has been deleted secret-namespace-5827
I0902 13:43:56.488637       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-77ktj" objectUID=b293f71a-bbce-45bb-b9ae-0778e9b91a7a kind="CiliumEndpoint" virtual=false
I0902 13:43:56.751153       1 pv_controller_base.go:505] deletion of claim "volume-3640/pvc-6t5sw" was already processed
I0902 13:43:56.830229       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc" objectUID=424abc8a-19ad-420a-9828-08e4c96a69dc kind="ReplicationController" virtual=false
I0902 13:43:56.982935       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-9f0085ba-0355-4b70-8dca-1dba4a3a823d" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-1618^4") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:43:57.030567       1 garbagecollector.go:580] "Deleting object" object="webhook-4014/sample-webhook-deployment-78988fc6cd-d8qbn" objectUID=5a43b553-5722-492d-98da-be144775fdab kind="Pod" propagationPolicy=Background
I0902 13:43:57.142203       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc-42hsl" objectUID=9b1ffd13-aa2a-4b8f-a6ee-c1a2dfa35a6a kind="Pod" virtual=false
I0902 13:43:57.182736       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/ReplicationController, namespace: gc-9324, name: simpletest.rc, uid: 424abc8a-19ad-420a-9828-08e4c96a69dc]
I0902 13:43:57.201085       1 namespace_controller.go:185] Namespace has been deleted kubectl-2434
I0902 13:43:57.245333       1 garbagecollector.go:471] "Processing object" object="webhook-4014/sample-webhook-deployment-78988fc6cd-d8qbn" objectUID=2df69005-23da-4ebf-8bd7-e7ed7f6b14d3 kind="CiliumEndpoint" virtual=false
I0902 13:43:57.361986       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-1618/pvc-2xcv5"
I0902 13:43:57.377338       1 pv_controller.go:640] volume "pvc-9f0085ba-0355-4b70-8dca-1dba4a3a823d" is released and reclaim policy "Delete" will be executed
I0902 13:43:57.381193       1 pv_controller.go:879] volume "pvc-9f0085ba-0355-4b70-8dca-1dba4a3a823d" entered phase "Released"
I0902 13:43:57.386701       1 pv_controller.go:1340] isVolumeReleased[pvc-9f0085ba-0355-4b70-8dca-1dba4a3a823d]: volume is released
I0902 13:43:57.401334       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-1618/pvc-2xcv5" was already processed
I0902 13:43:57.433013       1 garbagecollector.go:471] "Processing object" object="gc-9324/simpletest.rc" objectUID=424abc8a-19ad-420a-9828-08e4c96a69dc kind="ReplicationController" virtual=false
I0902 13:43:57.508822       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "aws-whgq2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0124bd400711dc447") on node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:43:57.974366       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-e576dc71-7488-46c5-a1f8-9b9f98ecbf3a" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0d018c4f19a1d9712") from node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:43:57.974543       1 event.go:294] "Event occurred" object="statefulset-6091/ss-1" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-e576dc71-7488-46c5-a1f8-9b9f98ecbf3a\" "
I0902 13:43:58.519799       1 garbagecollector.go:471] "Processing object" object="conntrack-178/pod-server-1" objectUID=2b91486e-ce30-488d-9504-b7b10d6ad0bc kind="CiliumEndpoint" virtual=false
I0902 13:43:58.527004       1 garbagecollector.go:580] "Deleting object" object="conntrack-178/pod-server-1" objectUID=2b91486e-ce30-488d-9504-b7b10d6ad0bc kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:43:58.532238       1 namespace_controller.go:185] Namespace has been deleted provisioning-8133
W0902 13:43:58.637540       1 utils.go:265] Service services-9188/service-headless-toggled using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
I0902 13:43:59.150221       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5806
I0902 13:43:59.236396       1 garbagecollector.go:471] "Processing object" object="ephemeral-5806-4162/csi-hostpathplugin-8cc98ffb" objectUID=3185b3df-34c6-468a-9dd9-2ed1b81852dc kind="ControllerRevision" virtual=false
I0902 13:43:59.236406       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-5806-4162/csi-hostpathplugin
I0902 13:43:59.236537       1 garbagecollector.go:471] "Processing object" object="ephemeral-5806-4162/csi-hostpathplugin-0" objectUID=b131b3eb-60dc-49fa-aec2-092423e215ca kind="Pod" virtual=false
I0902 13:43:59.239179       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5806-4162/csi-hostpathplugin-0" objectUID=b131b3eb-60dc-49fa-aec2-092423e215ca kind="Pod" propagationPolicy=Background
I0902 13:43:59.239488       1 garbagecollector.go:580] "Deleting object" object="ephemeral-5806-4162/csi-hostpathplugin-8cc98ffb" objectUID=3185b3df-34c6-468a-9dd9-2ed1b81852dc kind="ControllerRevision" propagationPolicy=Background
I0902 13:43:59.254921       1 pv_controller.go:879] volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" entered phase "Bound"
I0902 13:43:59.255042       1 pv_controller.go:982] volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" bound to claim "volume-4814/aws22w5c"
I0902 13:43:59.265212       1 pv_controller.go:823] claim "volume-4814/aws22w5c" entered phase "Bound"
I0902 13:43:59.353367       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-8120/pvc-5gz4k"
I0902 13:43:59.362587       1 pv_controller.go:640] volume "local-k5pdh" is released and reclaim policy "Retain" will be executed
I0902 13:43:59.371177       1 pv_controller.go:879] volume "local-k5pdh" entered phase "Released"
I0902 13:43:59.482483       1 pv_controller_base.go:505] deletion of claim "volume-8120/pvc-5gz4k" was already processed
E0902 13:43:59.506602       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-4014-markers/default: secrets "default-token-gtpkg" is forbidden: unable to create new content in namespace webhook-4014-markers because it is being terminated
E0902 13:43:59.548254       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-4014/default: secrets "default-token-jw5l8" is forbidden: unable to create new content in namespace webhook-4014 because it is being terminated
I0902 13:43:59.973312       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f40356ec66ad3c24") from node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:44:00.077122       1 event.go:294] "Event occurred" object="csi-mock-volumes-4489/pvc-nwshn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-4489\" or manually created by system administrator"
I0902 13:44:00.077367       1 event.go:294] "Event occurred" object="csi-mock-volumes-4489/pvc-nwshn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-4489\" or manually created by system administrator"
I0902 13:44:00.090087       1 pv_controller.go:879] volume "pvc-5b63f468-e842-42ef-ab41-972828857a76" entered phase "Bound"
I0902 13:44:00.090125       1 pv_controller.go:982] volume "pvc-5b63f468-e842-42ef-ab41-972828857a76" bound to claim "csi-mock-volumes-4489/pvc-nwshn"
I0902 13:44:00.097303       1 pv_controller.go:823] claim "csi-mock-volumes-4489/pvc-nwshn" entered phase "Bound"
I0902 13:44:00.113012       1 job_controller.go:406] enqueueing job cronjob-9072/concurrent-27176504
I0902 13:44:00.113481       1 event.go:294] "Event occurred" object="cronjob-9072/concurrent" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-27176504"
I0902 13:44:00.123570       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-9072/concurrent" resourceVersion="26379" err="Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again"
E0902 13:44:00.123735       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-9072/concurrent, requeuing: Operation cannot be fulfilled on cronjobs.batch "concurrent": the object has been modified; please apply your changes to the latest version and try again
I0902 13:44:00.125050       1 job_controller.go:406] enqueueing job cronjob-9072/concurrent-27176504
I0902 13:44:00.126081       1 event.go:294] "Event occurred" object="cronjob-9072/concurrent-27176504" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-27176504--1-znn5v"
I0902 13:44:00.137157       1 job_controller.go:406] enqueueing job cronjob-9072/concurrent-27176504
I0902 13:44:00.145327       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176504
I0902 13:44:00.149171       1 job_controller.go:406] enqueueing job cronjob-9072/concurrent-27176504
I0902 13:44:00.149479       1 event.go:294] "Event occurred" object="cronjob-5301/concurrent" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-27176504"
I0902 13:44:00.159061       1 job_controller.go:406] enqueueing job cronjob-9072/concurrent-27176504
I0902 13:44:00.169088       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176504
I0902 13:44:00.169519       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-5301/concurrent" resourceVersion="26219" err="Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again"
E0902 13:44:00.169637       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-5301/concurrent, requeuing: Operation cannot be fulfilled on cronjobs.batch "concurrent": the object has been modified; please apply your changes to the latest version and try again
I0902 13:44:00.170079       1 event.go:294] "Event occurred" object="cronjob-5301/concurrent-27176504" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-27176504--1-zltc5"
I0902 13:44:00.174799       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176504
I0902 13:44:00.185918       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176504
E0902 13:44:00.807051       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-3953/default: secrets "default-token-mt9wh" is forbidden: unable to create new content in namespace downward-api-3953 because it is being terminated
I0902 13:44:00.921877       1 garbagecollector.go:471] "Processing object" object="cronjob-5301/concurrent-27176504" objectUID=dd3e71b0-0dd5-41d6-99bd-b84c8a675642 kind="Job" virtual=false
I0902 13:44:00.922113       1 garbagecollector.go:471] "Processing object" object="cronjob-5301/concurrent-27176503" objectUID=ed0aafd0-570f-4faf-b268-18d63adb6f5f kind="Job" virtual=false
I0902 13:44:00.923767       1 garbagecollector.go:580] "Deleting object" object="cronjob-5301/concurrent-27176504" objectUID=dd3e71b0-0dd5-41d6-99bd-b84c8a675642 kind="Job" propagationPolicy=Background
I0902 13:44:00.924421       1 garbagecollector.go:580] "Deleting object" object="cronjob-5301/concurrent-27176503" objectUID=ed0aafd0-570f-4faf-b268-18d63adb6f5f kind="Job" propagationPolicy=Background
I0902 13:44:00.926983       1 garbagecollector.go:471] "Processing object" object="cronjob-5301/concurrent-27176503--1-6jlzb" objectUID=6719325a-8e7d-4eeb-8021-f819f9a5940e kind="Pod" virtual=false
I0902 13:44:00.927191       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176503
I0902 13:44:00.928479       1 garbagecollector.go:471] "Processing object" object="cronjob-5301/concurrent-27176504--1-zltc5" objectUID=8f49c8ce-cb7b-4fbc-8b3a-15ef7dabc9b5 kind="Pod" virtual=false
I0902 13:44:00.928757       1 job_controller.go:406] enqueueing job cronjob-5301/concurrent-27176504
I0902 13:44:00.930946       1 garbagecollector.go:580] "Deleting object" object="cronjob-5301/concurrent-27176503--1-6jlzb" objectUID=6719325a-8e7d-4eeb-8021-f819f9a5940e kind="Pod" propagationPolicy=Background
I0902 13:44:00.931154       1 garbagecollector.go:580] "Deleting object" object="cronjob-5301/concurrent-27176504--1-zltc5" objectUID=8f49c8ce-cb7b-4fbc-8b3a-15ef7dabc9b5 kind="Pod" propagationPolicy=Background
I0902 13:44:01.436203       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-7521/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0902 13:44:01.437950       1 event.go:294] "Event occurred" object="webhook-7521/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0902 13:44:01.450862       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-7521/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:01.456889       1 event.go:294] "Event occurred" object="webhook-7521/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-rt72n"
I0902 13:44:01.732062       1 job_controller.go:406] enqueueing job cronjob-9072/concurrent-27176504
I0902 13:44:01.732631       1 event.go:294] "Event occurred" object="cronjob-9072/concurrent-27176504" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0902 13:44:01.742444       1 job_controller.go:406] enqueueing job cronjob-9072/concurrent-27176504
I0902 13:44:01.743119       1 event.go:294] "Event occurred" object="cronjob-9072/concurrent" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SawCompletedJob" message="Saw completed job: concurrent-27176504, status: Complete"
I0902 13:44:02.239500       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f40356ec66ad3c24") from node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:44:02.239858       1 event.go:294] "Event occurred" object="volume-4814/exec-volume-test-dynamicpv-kbfw" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-480c953b-70ce-407b-ad71-530268a99d34\" "
I0902 13:44:02.382503       1 stateful_set_control.go:555] StatefulSet statefulset-7854/ss2 terminating Pod ss2-2 for update
I0902 13:44:02.391469       1 event.go:294] "Event occurred" object="statefulset-7854/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-2 in StatefulSet ss2 successful"
E0902 13:44:02.800528       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-1618/default: secrets "default-token-452xh" is forbidden: unable to create new content in namespace csi-mock-volumes-1618 because it is being terminated
E0902 13:44:03.510442       1 tokens_controller.go:262] error synchronizing serviceaccount ingressclass-9346/default: secrets "default-token-lp6zd" is forbidden: unable to create new content in namespace ingressclass-9346 because it is being terminated
I0902 13:44:03.682871       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-4456/pvc-wb926"
I0902 13:44:03.691089       1 pv_controller.go:640] volume "local-knjnd" is released and reclaim policy "Retain" will be executed
I0902 13:44:03.694742       1 pv_controller.go:879] volume "local-knjnd" entered phase "Released"
I0902 13:44:03.795452       1 pv_controller_base.go:505] deletion of claim "provisioning-4456/pvc-wb926" was already processed
I0902 13:44:03.802498       1 pv_controller.go:1340] isVolumeReleased[pvc-15ba64ec-92b9-4719-98c9-859e56310ebd]: volume is released
I0902 13:44:04.162697       1 stateful_set_control.go:555] StatefulSet statefulset-9084/ss2 terminating Pod ss2-0 for update
I0902 13:44:04.171533       1 event.go:294] "Event occurred" object="statefulset-9084/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-0 in StatefulSet ss2 successful"
I0902 13:44:04.571296       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-864fb64577" need=2 deleting=1
I0902 13:44:04.571745       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9199/test-rolling-update-with-lb-864fb64577" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]
I0902 13:44:04.571929       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-864fb64577" pod="deployment-9199/test-rolling-update-with-lb-864fb64577-p4lvx"
I0902 13:44:04.573886       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-864fb64577 to 2"
I0902 13:44:04.595266       1 garbagecollector.go:471] "Processing object" object="deployment-9199/test-rolling-update-with-lb-864fb64577-p4lvx" objectUID=e6fd7955-c175-45eb-a53e-c587dd90c413 kind="CiliumEndpoint" virtual=false
I0902 13:44:04.596526       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-864fb64577-p4lvx"
I0902 13:44:04.607429       1 garbagecollector.go:580] "Deleting object" object="deployment-9199/test-rolling-update-with-lb-864fb64577-p4lvx" objectUID=e6fd7955-c175-45eb-a53e-c587dd90c413 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:04.608026       1 namespace_controller.go:185] Namespace has been deleted webhook-4014
I0902 13:44:04.613778       1 endpoints_controller.go:374] "Error syncing endpoints, retrying" service="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:04.614230       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 2"
I0902 13:44:04.614337       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint deployment-9199/test-rolling-update-with-lb: Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:04.614670       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" need=2 creating=1
I0902 13:44:04.626715       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on replicasets.apps \"test-rolling-update-with-lb-5ff6986c95\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:04.627710       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-rs5m2"
I0902 13:44:04.648958       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:04.690457       1 namespace_controller.go:185] Namespace has been deleted webhook-4014-markers
I0902 13:44:04.757972       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-2642/csi-hostpathjjr5w"
I0902 13:44:04.773550       1 pv_controller.go:640] volume "pvc-2f180fd2-f137-423c-a333-4e38635d6b2b" is released and reclaim policy "Delete" will be executed
I0902 13:44:04.779976       1 pv_controller.go:879] volume "pvc-2f180fd2-f137-423c-a333-4e38635d6b2b" entered phase "Released"
I0902 13:44:04.784371       1 pv_controller.go:1340] isVolumeReleased[pvc-2f180fd2-f137-423c-a333-4e38635d6b2b]: volume is released
E0902 13:44:04.809716       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-5806-4162/default: secrets "default-token-tzvwt" is forbidden: unable to create new content in namespace ephemeral-5806-4162 because it is being terminated
I0902 13:44:04.827284       1 pv_controller_base.go:505] deletion of claim "volume-expand-2642/csi-hostpathjjr5w" was already processed
E0902 13:44:05.081258       1 tokens_controller.go:262] error synchronizing serviceaccount volume-3640/default: secrets "default-token-n9tfn" is forbidden: unable to create new content in namespace volume-3640 because it is being terminated
I0902 13:44:05.220329       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-6958/pvc-7cxs5"
I0902 13:44:05.235903       1 pv_controller.go:640] volume "local-hkwp6" is released and reclaim policy "Retain" will be executed
I0902 13:44:05.239948       1 pv_controller.go:879] volume "local-hkwp6" entered phase "Released"
I0902 13:44:05.331413       1 pv_controller_base.go:505] deletion of claim "provisioning-6958/pvc-7cxs5" was already processed
I0902 13:44:05.760032       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-864fb64577" need=1 deleting=1
I0902 13:44:05.760704       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9199/test-rolling-update-with-lb-864fb64577" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]
I0902 13:44:05.760645       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-864fb64577 to 1"
I0902 13:44:05.761026       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-864fb64577" pod="deployment-9199/test-rolling-update-with-lb-864fb64577-82xs7"
I0902 13:44:05.769057       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" need=3 creating=1
I0902 13:44:05.771496       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 3"
I0902 13:44:05.777370       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-j9h7m"
I0902 13:44:05.790906       1 garbagecollector.go:471] "Processing object" object="deployment-9199/test-rolling-update-with-lb-864fb64577-82xs7" objectUID=0889a595-89ed-4ea8-bb5d-57941fdd2c5b kind="CiliumEndpoint" virtual=false
I0902 13:44:05.825144       1 garbagecollector.go:580] "Deleting object" object="deployment-9199/test-rolling-update-with-lb-864fb64577-82xs7" objectUID=0889a595-89ed-4ea8-bb5d-57941fdd2c5b kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:05.826523       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-864fb64577-82xs7"
I0902 13:44:05.892255       1 pv_controller.go:1340] isVolumeReleased[pvc-15ba64ec-92b9-4719-98c9-859e56310ebd]: volume is released
I0902 13:44:05.912055       1 namespace_controller.go:185] Namespace has been deleted downward-api-3953
I0902 13:44:06.048525       1 pv_controller_base.go:505] deletion of claim "fsgroupchangepolicy-5411/awskp5f4" was already processed
I0902 13:44:06.140274       1 pv_controller.go:879] volume "nfs-4j9tx" entered phase "Available"
E0902 13:44:06.174566       1 tokens_controller.go:262] error synchronizing serviceaccount cronjob-5301/default: secrets "default-token-68jx2" is forbidden: unable to create new content in namespace cronjob-5301 because it is being terminated
I0902 13:44:06.203258       1 event.go:294] "Event occurred" object="statefulset-9084/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0902 13:44:06.248526       1 pv_controller.go:930] claim "pv-7129/pvc-6b8qk" bound to volume "nfs-4j9tx"
I0902 13:44:06.258081       1 pv_controller.go:879] volume "nfs-4j9tx" entered phase "Bound"
I0902 13:44:06.258150       1 pv_controller.go:982] volume "nfs-4j9tx" bound to claim "pv-7129/pvc-6b8qk"
I0902 13:44:06.268933       1 pv_controller.go:823] claim "pv-7129/pvc-6b8qk" entered phase "Bound"
I0902 13:44:06.365929       1 event.go:294] "Event occurred" object="ephemeral-9062-6189/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0902 13:44:06.737882       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-15ba64ec-92b9-4719-98c9-859e56310ebd" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-02fdcca57b1c17075") on node "ip-172-20-49-181.eu-central-1.compute.internal" 
I0902 13:44:07.027147       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1618-9736/csi-mockplugin-5d689478b8" objectUID=7156718b-3e53-4a78-8af7-a000b2c7c590 kind="ControllerRevision" virtual=false
I0902 13:44:07.027159       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1618-9736/csi-mockplugin
I0902 13:44:07.027210       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1618-9736/csi-mockplugin-0" objectUID=b06a3d45-0d4a-4729-91e5-a7c82b5ead10 kind="Pod" virtual=false
I0902 13:44:07.030654       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1618-9736/csi-mockplugin-5d689478b8" objectUID=7156718b-3e53-4a78-8af7-a000b2c7c590 
kind=\"ControllerRevision\" propagationPolicy=Background\nE0902 13:44:07.031211       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-7632/pvc-865x5: storageclass.storage.k8s.io \"provisioning-7632\" not found\nI0902 13:44:07.031302       1 event.go:294] \"Event occurred\" object=\"provisioning-7632/pvc-865x5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7632\\\" not found\"\nI0902 13:44:07.032435       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1618-9736/csi-mockplugin-0\" objectUID=b06a3d45-0d4a-4729-91e5-a7c82b5ead10 kind=\"Pod\" propagationPolicy=Background\nI0902 13:44:07.134528       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1618-9736/csi-mockplugin-attacher-5c8f78c866\" objectUID=52d6a2f8-d237-4a8f-ba5d-d134edf740c3 kind=\"ControllerRevision\" virtual=false\nI0902 13:44:07.134822       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1618-9736/csi-mockplugin-attacher\nI0902 13:44:07.135094       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-1618-9736/csi-mockplugin-attacher-0\" objectUID=39469958-d94a-4023-8647-0629ddf7df9b kind=\"Pod\" virtual=false\nI0902 13:44:07.136668       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1618-9736/csi-mockplugin-attacher-5c8f78c866\" objectUID=52d6a2f8-d237-4a8f-ba5d-d134edf740c3 kind=\"ControllerRevision\" propagationPolicy=Background\nI0902 13:44:07.137053       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-1618-9736/csi-mockplugin-attacher-0\" objectUID=39469958-d94a-4023-8647-0629ddf7df9b kind=\"Pod\" propagationPolicy=Background\nI0902 13:44:07.145119       1 pv_controller.go:879] volume \"local-6vs2d\" entered phase \"Available\"\nI0902 13:44:07.400525       1 event.go:294] \"Event occurred\" 
object=\"statefulset-522/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0902 13:44:07.400814       1 event.go:294] \"Event occurred\" object=\"statefulset-522/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nI0902 13:44:07.410819       1 event.go:294] \"Event occurred\" object=\"statefulset-522/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0902 13:44:07.425464       1 event.go:294] \"Event occurred\" object=\"statefulset-522/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0902 13:44:07.509106       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1767/condition-test\" need=3 creating=3\nI0902 13:44:07.519701       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-tkhzv\"\nI0902 13:44:07.528791       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-jhgpz\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0902 13:44:07.530865       1 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1767/condition-test\nI0902 13:44:07.531959       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-qkxdj\"\nE0902 13:44:07.536211       1 replica_set.go:536] sync \"replication-controller-1767/condition-test\" failed with pods \"condition-test-jhgpz\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0902 13:44:07.537516       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1767/condition-test\" need=3 creating=1\nI0902 13:44:07.541434       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1767/condition-test\nI0902 13:44:07.541740       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-p84cr\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0902 13:44:07.550879       1 replica_set.go:536] sync \"replication-controller-1767/condition-test\" failed with pods \"condition-test-p84cr\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0902 13:44:07.551602       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1767/condition-test\" need=3 creating=1\nI0902 13:44:07.557453       1 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1767/condition-test\nE0902 13:44:07.557611       1 replica_set.go:536] sync \"replication-controller-1767/condition-test\" failed with pods \"condition-test-qmlsg\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0902 13:44:07.557887       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-qmlsg\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0902 13:44:07.558424       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1767/condition-test\" need=3 creating=1\nI0902 13:44:07.560059       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1767/condition-test\nI0902 13:44:07.560372       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-qkvdt\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0902 13:44:07.560519       1 replica_set.go:536] sync \"replication-controller-1767/condition-test\" failed with pods \"condition-test-qkvdt\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0902 13:44:07.561849       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1767/condition-test\" need=3 creating=1\nI0902 13:44:07.563428       1 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1767/condition-test\nE0902 13:44:07.563485       1 replica_set.go:536] sync \"replication-controller-1767/condition-test\" failed with pods \"condition-test-njvpm\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0902 13:44:07.563560       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-njvpm\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0902 13:44:07.644153       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1767/condition-test\" need=3 creating=1\nI0902 13:44:07.646165       1 replica_set.go:588] Slow-start failure. Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1767/condition-test\nE0902 13:44:07.646211       1 replica_set.go:536] sync \"replication-controller-1767/condition-test\" failed with pods \"condition-test-gq8nt\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0902 13:44:07.646301       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-gq8nt\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0902 13:44:07.806654       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-1767/condition-test\" need=3 creating=1\nI0902 13:44:07.809099       1 replica_set.go:588] Slow-start failure. 
Skipping creation of 1 pods, decrementing expectations for ReplicationController replication-controller-1767/condition-test\nE0902 13:44:07.809148       1 replica_set.go:536] sync \"replication-controller-1767/condition-test\" failed with pods \"condition-test-r5frf\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0902 13:44:07.809191       1 event.go:294] \"Event occurred\" object=\"replication-controller-1767/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-r5frf\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0902 13:44:07.856546       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1618\nI0902 13:44:08.609457       1 namespace_controller.go:185] Namespace has been deleted ingressclass-9346\nI0902 13:44:09.307500       1 event.go:294] \"Event occurred\" object=\"statefulset-7854/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0902 13:44:09.496966       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-7521/e2e-test-webhook-w8hfk\" objectUID=535a2bee-36fe-4a1c-9159-d0f04cb4fd54 kind=\"EndpointSlice\" virtual=false\nI0902 13:44:09.502655       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-7521/e2e-test-webhook-w8hfk\" objectUID=535a2bee-36fe-4a1c-9159-d0f04cb4fd54 kind=\"EndpointSlice\" propagationPolicy=Background\nE0902 13:44:09.595645       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-1062/pvc-p529c: storageclass.storage.k8s.io \"provisioning-1062\" not found\nI0902 13:44:09.595933       1 event.go:294] \"Event occurred\" object=\"provisioning-1062/pvc-p529c\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" 
reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-1062\\\" not found\"\nI0902 13:44:09.615187       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-7521/sample-webhook-deployment-78988fc6cd\" objectUID=c8ae0a6f-7392-45cc-916e-6248267039cb kind=\"ReplicaSet\" virtual=false\nI0902 13:44:09.615408       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-7521/sample-webhook-deployment\"\nI0902 13:44:09.617989       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-7521/sample-webhook-deployment-78988fc6cd\" objectUID=c8ae0a6f-7392-45cc-916e-6248267039cb kind=\"ReplicaSet\" propagationPolicy=Background\nI0902 13:44:09.620938       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-7521/sample-webhook-deployment-78988fc6cd-rt72n\" objectUID=075292d1-2619-44a8-aa3e-4bb390bc0a58 kind=\"Pod\" virtual=false\nI0902 13:44:09.622575       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-7521/sample-webhook-deployment-78988fc6cd-rt72n\" objectUID=075292d1-2619-44a8-aa3e-4bb390bc0a58 kind=\"Pod\" propagationPolicy=Background\nI0902 13:44:09.629672       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-7521/sample-webhook-deployment-78988fc6cd-rt72n\" objectUID=03849405-89ca-4e2b-9a32-499c1f82391b kind=\"CiliumEndpoint\" virtual=false\nI0902 13:44:09.641705       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-7521/sample-webhook-deployment-78988fc6cd-rt72n\" objectUID=03849405-89ca-4e2b-9a32-499c1f82391b kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0902 13:44:09.716697       1 pv_controller.go:879] volume \"local-jqbch\" entered phase \"Available\"\nI0902 13:44:09.890601       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5806-4162\nE0902 13:44:10.114390       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-2642/default: secrets \"default-token-s4j6p\" is 
forbidden: unable to create new content in namespace volume-expand-2642 because it is being terminated\nI0902 13:44:10.148405       1 namespace_controller.go:185] Namespace has been deleted volume-3640\nE0902 13:44:10.436394       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:44:10.464294       1 stateful_set_control.go:521] StatefulSet statefulset-9084/ss2 terminating Pod ss2-2 for scale down\nI0902 13:44:10.469920       1 event.go:294] \"Event occurred\" object=\"statefulset-9084/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI0902 13:44:10.495918       1 namespace_controller.go:185] Namespace has been deleted gc-9324\nI0902 13:44:10.719199       1 namespace_controller.go:185] Namespace has been deleted volume-8120\nE0902 13:44:10.723790       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4456/default: secrets \"default-token-5qv2z\" is forbidden: unable to create new content in namespace provisioning-4456 because it is being terminated\nI0902 13:44:10.764465       1 pv_controller.go:879] volume \"pvc-a4da1a6e-a0c7-40ac-9340-a37efb2d2cf8\" entered phase \"Bound\"\nI0902 13:44:10.764496       1 pv_controller.go:982] volume \"pvc-a4da1a6e-a0c7-40ac-9340-a37efb2d2cf8\" bound to claim \"statefulset-522/datadir-ss-0\"\nI0902 13:44:10.773138       1 pv_controller.go:823] claim \"statefulset-522/datadir-ss-0\" entered phase \"Bound\"\nI0902 13:44:10.835829       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-9199/test-rolling-update-with-lb-864fb64577\" need=0 deleting=1\nI0902 13:44:10.835864       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-9199/test-rolling-update-with-lb-864fb64577\" 
relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]\nI0902 13:44:10.836077       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-864fb64577\" pod=\"deployment-9199/test-rolling-update-with-lb-864fb64577-htd9z\"\nI0902 13:44:10.837846       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-864fb64577 to 0\"\nI0902 13:44:10.854456       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-864fb64577-htd9z\"\nI0902 13:44:10.856005       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9199/test-rolling-update-with-lb-864fb64577-htd9z\" objectUID=3d67170e-0174-4744-b631-30f134f5ef0e kind=\"CiliumEndpoint\" virtual=false\nI0902 13:44:10.875846       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-9199/test-rolling-update-with-lb-864fb64577-htd9z\" objectUID=3d67170e-0174-4744-b631-30f134f5ef0e kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0902 13:44:11.478002       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-6958/default: secrets \"default-token-cf5wl\" is forbidden: unable to create new content in namespace provisioning-6958 because it is being terminated\nI0902 13:44:11.492876       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-a4da1a6e-a0c7-40ac-9340-a37efb2d2cf8\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05c5f0d830f552310\") from node \"ip-172-20-45-138.eu-central-1.compute.internal\" \nE0902 13:44:11.693850       1 tokens_controller.go:262] error synchronizing serviceaccount 
replication-controller-221/default: secrets \"default-token-cxqqp\" is forbidden: unable to create new content in namespace replication-controller-221 because it is being terminated\nI0902 13:44:11.918072       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-9723/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI0902 13:44:11.918788       1 event.go:294] \"Event occurred\" object=\"webhook-9723/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI0902 13:44:11.926104       1 event.go:294] \"Event occurred\" object=\"webhook-9723/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-bzk5b\"\nI0902 13:44:11.932761       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-9723/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0902 13:44:12.174131       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9199/test-rolling-update-with-lb-59c4fc87b4\" need=1 creating=1\nI0902 13:44:12.175701       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 1\"\nI0902 13:44:12.181360       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-8vdf7\"\nI0902 13:44:12.186183       1 
deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9199/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0902 13:44:12.744468       1 event.go:294] \"Event occurred\" object=\"apply-8103/deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-585449566 to 3\"\nI0902 13:44:12.744752       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"apply-8103/deployment-585449566\" need=3 creating=3\nI0902 13:44:12.756392       1 event.go:294] \"Event occurred\" object=\"apply-8103/deployment-585449566\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-585449566-2cxkn\"\nI0902 13:44:12.764651       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"apply-8103/deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0902 13:44:12.765126       1 event.go:294] \"Event occurred\" object=\"apply-8103/deployment-585449566\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-585449566-65pj4\"\nI0902 13:44:12.765414       1 event.go:294] \"Event occurred\" object=\"apply-8103/deployment-585449566\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-585449566-zhl4w\"\nI0902 13:44:12.821931       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"apply-8103/deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment\\\": the object has been modified; please apply your changes to the latest 
version and try again\"\nI0902 13:44:12.849573       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"apply-8103/deployment-55649fd747\" need=1 creating=1\nI0902 13:44:12.850040       1 event.go:294] \"Event occurred\" object=\"apply-8103/deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set deployment-55649fd747 to 1\"\nI0902 13:44:12.857603       1 event.go:294] \"Event occurred\" object=\"apply-8103/deployment-55649fd747\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: deployment-55649fd747-cl7h4\"\nI0902 13:44:12.860888       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"apply-8103/deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0902 13:44:12.891123       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-3953/pvc-nzps4: storageclass.storage.k8s.io \"provisioning-3953\" not found\nI0902 13:44:12.891542       1 event.go:294] \"Event occurred\" object=\"provisioning-3953/pvc-nzps4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-3953\\\" not found\"\nI0902 13:44:13.004800       1 pv_controller.go:879] volume \"local-2nt56\" entered phase \"Available\"\nI0902 13:44:13.262601       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-1767/condition-test-tkhzv\" objectUID=1c8a42ca-f3f3-4b23-b7d4-60b39fe6ed2b kind=\"Pod\" virtual=false\nI0902 13:44:13.262875       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-1767/condition-test-qkxdj\" objectUID=b9c0b9a9-d509-4c07-ae59-599846613060 kind=\"Pod\" virtual=false\nI0902 13:44:13.265259       1 garbagecollector.go:580] 
\"Deleting object\" object=\"replication-controller-1767/condition-test-tkhzv\" objectUID=1c8a42ca-f3f3-4b23-b7d4-60b39fe6ed2b kind=\"Pod\" propagationPolicy=Background\nI0902 13:44:13.265512       1 garbagecollector.go:580] \"Deleting object\" object=\"replication-controller-1767/condition-test-qkxdj\" objectUID=b9c0b9a9-d509-4c07-ae59-599846613060 kind=\"Pod\" propagationPolicy=Background\nI0902 13:44:13.291680       1 garbagecollector.go:471] \"Processing object\" object=\"apply-8103/deployment-585449566\" objectUID=80d0c0dc-b274-46df-ac98-0067835f0235 kind=\"ReplicaSet\" virtual=false\nI0902 13:44:13.291855       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"apply-8103/deployment\"\nI0902 13:44:13.291943       1 garbagecollector.go:471] \"Processing object\" object=\"apply-8103/deployment-55649fd747\" objectUID=9162e5a5-71f0-48c5-b5a3-102defecdb08 kind=\"ReplicaSet\" virtual=false\nI0902 13:44:13.293871       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-8103/deployment-585449566\" objectUID=80d0c0dc-b274-46df-ac98-0067835f0235 kind=\"ReplicaSet\" propagationPolicy=Background\nI0902 13:44:13.295795       1 garbagecollector.go:580] \"Deleting object\" object=\"apply-8103/deployment-55649fd747\" objectUID=9162e5a5-71f0-48c5-b5a3-102defecdb08 kind=\"ReplicaSet\" propagationPolicy=Background\nI0902 13:44:13.301425       1 garbagecollector.go:471] \"Processing object\" object=\"apply-8103/deployment-585449566-zhl4w\" objectUID=6d70d2d2-0a38-4fe1-ba73-e36eee15ec64 kind=\"Pod\" virtual=false\nI0902 13:44:13.301658       1 garbagecollector.go:471] \"Processing object\" object=\"apply-8103/deployment-585449566-2cxkn\" objectUID=86742e47-cad8-495e-af90-6e164bb7cc1a kind=\"Pod\" virtual=false\nI0902 13:44:13.301778       1 garbagecollector.go:471] \"Processing object\" object=\"apply-8103/deployment-585449566-65pj4\" objectUID=a30f9af2-0c4a-46c5-87b4-ec2813423371 kind=\"Pod\" virtual=false\nI0902 13:44:13.303327       1 
garbagecollector.go:471] "Processing object" object="apply-8103/deployment-55649fd747-cl7h4" objectUID=34abdd68-c870-4394-9677-03bc53fed00f kind="Pod" virtual=false
I0902 13:44:13.304871       1 garbagecollector.go:580] "Deleting object" object="apply-8103/deployment-585449566-zhl4w" objectUID=6d70d2d2-0a38-4fe1-ba73-e36eee15ec64 kind="Pod" propagationPolicy=Background
I0902 13:44:13.305051       1 garbagecollector.go:580] "Deleting object" object="apply-8103/deployment-585449566-2cxkn" objectUID=86742e47-cad8-495e-af90-6e164bb7cc1a kind="Pod" propagationPolicy=Background
I0902 13:44:13.305215       1 garbagecollector.go:580] "Deleting object" object="apply-8103/deployment-585449566-65pj4" objectUID=a30f9af2-0c4a-46c5-87b4-ec2813423371 kind="Pod" propagationPolicy=Background
I0902 13:44:13.308732       1 garbagecollector.go:580] "Deleting object" object="apply-8103/deployment-55649fd747-cl7h4" objectUID=34abdd68-c870-4394-9677-03bc53fed00f kind="Pod" propagationPolicy=Background
I0902 13:44:13.335759       1 resource_quota_controller.go:307] Resource quota has been deleted replication-controller-1767/condition-test
I0902 13:44:13.497508       1 event.go:294] "Event occurred" object="volume-expand-4507-5101/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0902 13:44:13.575624       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-7129/pvc-6b8qk"
I0902 13:44:13.582358       1 pv_controller.go:640] volume "nfs-4j9tx" is released and reclaim policy "Retain" will be executed
I0902 13:44:13.586069       1 pv_controller.go:879] volume "nfs-4j9tx" entered phase "Released"
I0902 13:44:13.789615       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-4814/aws22w5c"
I0902 13:44:13.798491       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-a4da1a6e-a0c7-40ac-9340-a37efb2d2cf8" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05c5f0d830f552310") from node "ip-172-20-45-138.eu-central-1.compute.internal" 
I0902 13:44:13.798725       1 event.go:294] "Event occurred" object="statefulset-522/ss-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-a4da1a6e-a0c7-40ac-9340-a37efb2d2cf8\" "
I0902 13:44:13.800248       1 pv_controller.go:640] volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" is released and reclaim policy "Delete" will be executed
I0902 13:44:13.803426       1 pv_controller.go:879] volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" entered phase "Released"
I0902 13:44:13.805232       1 pv_controller.go:1340] isVolumeReleased[pvc-480c953b-70ce-407b-ad71-530268a99d34]: volume is released
I0902 13:44:13.813005       1 event.go:294] "Event occurred" object="volume-expand-4507/csi-hostpathhm69w" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-4507\" or manually created by system administrator"
I0902 13:44:13.813162       1 event.go:294] "Event occurred" object="volume-expand-4507/csi-hostpathhm69w" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-4507\" or manually created by system administrator"
I0902 13:44:14.016654       1 pv_controller_base.go:505] deletion of claim "pv-7129/pvc-6b8qk" was already processed
E0902 13:44:14.234252       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-3072/pvc-kdhw6: storageclass.storage.k8s.io "volume-3072" not found
I0902 13:44:14.234801       1 event.go:294] "Event occurred" object="volume-3072/pvc-kdhw6" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-3072\" not found"
E0902 13:44:14.316177       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-7521/default: secrets "default-token-rjkmn" is forbidden: unable to create new content in namespace webhook-7521 because it is being terminated
I0902 13:44:14.355884       1 pv_controller.go:879] volume "local-glbc2" entered phase "Available"
E0902 13:44:14.360672       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-3750/default: secrets "default-token-tvxct" is forbidden: unable to create new content in namespace configmap-3750 because it is being terminated
I0902 13:44:14.446447       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f40356ec66ad3c24") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:44:14.449925       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f40356ec66ad3c24") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
E0902 13:44:14.506315       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-7521-markers/default: secrets "default-token-6lkc9" is forbidden: unable to create new content in namespace webhook-7521-markers because it is being terminated
I0902 13:44:14.598257       1 garbagecollector.go:471] "Processing object" object="conntrack-178/pod-client" objectUID=0ca972cc-fd58-4ffa-b31c-5667b65eb308 kind="CiliumEndpoint" virtual=false
I0902 13:44:14.604706       1 garbagecollector.go:580] "Deleting object" object="conntrack-178/pod-client" objectUID=0ca972cc-fd58-4ffa-b31c-5667b65eb308 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:14.614014       1 garbagecollector.go:471] "Processing object" object="conntrack-178/pod-server-2" objectUID=e54200fe-27cb-4748-8987-2c2934d2fe24 kind="CiliumEndpoint" virtual=false
W0902 13:44:14.614335       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "conntrack-178/svc-udp", retrying. Error: EndpointSlice informer cache is out of date
I0902 13:44:14.628488       1 garbagecollector.go:580] "Deleting object" object="conntrack-178/pod-server-2" objectUID=e54200fe-27cb-4748-8987-2c2934d2fe24 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:14.741834       1 garbagecollector.go:471] "Processing object" object="conntrack-178/svc-udp-82fgq" objectUID=af968895-c725-48ab-a87e-a169d6cdee7a kind="EndpointSlice" virtual=false
I0902 13:44:14.744798       1 garbagecollector.go:580] "Deleting object" object="conntrack-178/svc-udp-82fgq" objectUID=af968895-c725-48ab-a87e-a169d6cdee7a kind="EndpointSlice" propagationPolicy=Background
I0902 13:44:14.906503       1 namespace_controller.go:185] Namespace has been deleted projected-5217
I0902 13:44:15.229661       1 namespace_controller.go:185] Namespace has been deleted volume-expand-2642
E0902 13:44:15.332611       1 tokens_controller.go:262] error synchronizing serviceaccount fsgroupchangepolicy-5411/default: secrets "default-token-9hhqr" is forbidden: unable to create new content in namespace fsgroupchangepolicy-5411 because it is being terminated
I0902 13:44:15.389775       1 garbagecollector.go:471] "Processing object" object="volume-expand-2642-3929/csi-hostpathplugin-759777569d" objectUID=d1f69f7c-6a24-4ed7-9b2a-4f8553f8c44f kind="ControllerRevision" virtual=false
I0902 13:44:15.389971       1 stateful_set.go:440] StatefulSet has been deleted volume-expand-2642-3929/csi-hostpathplugin
I0902 13:44:15.390044       1 garbagecollector.go:471] "Processing object" object="volume-expand-2642-3929/csi-hostpathplugin-0" objectUID=7d96a124-d411-4b85-85f1-3394b05abaa3 kind="Pod" virtual=false
I0902 13:44:15.393305       1 garbagecollector.go:580] "Deleting object" object="volume-expand-2642-3929/csi-hostpathplugin-759777569d" objectUID=d1f69f7c-6a24-4ed7-9b2a-4f8553f8c44f kind="ControllerRevision" propagationPolicy=Background
I0902 13:44:15.393928       1 garbagecollector.go:580] "Deleting object" object="volume-expand-2642-3929/csi-hostpathplugin-0" objectUID=7d96a124-d411-4b85-85f1-3394b05abaa3 kind="Pod" propagationPolicy=Background
I0902 13:44:15.656515       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-4489/pvc-nwshn"
I0902 13:44:15.662932       1 pv_controller.go:640] volume "pvc-5b63f468-e842-42ef-ab41-972828857a76" is released and reclaim policy "Delete" will be executed
I0902 13:44:15.667581       1 pv_controller.go:879] volume "pvc-5b63f468-e842-42ef-ab41-972828857a76" entered phase "Released"
I0902 13:44:15.670263       1 pv_controller.go:1340] isVolumeReleased[pvc-5b63f468-e842-42ef-ab41-972828857a76]: volume is released
I0902 13:44:15.678107       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-4489/pvc-nwshn" was already processed
I0902 13:44:16.002457       1 namespace_controller.go:185] Namespace has been deleted provisioning-4456
I0902 13:44:16.221370       1 stateful_set_control.go:555] StatefulSet statefulset-7854/ss2 terminating Pod ss2-1 for update
I0902 13:44:16.229438       1 event.go:294] "Event occurred" object="statefulset-7854/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
I0902 13:44:16.678674       1 namespace_controller.go:185] Namespace has been deleted provisioning-6958
E0902 13:44:16.719912       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:44:16.798311       1 namespace_controller.go:185] Namespace has been deleted replication-controller-221
I0902 13:44:17.025421       1 stateful_set_control.go:521] StatefulSet statefulset-9084/ss2 terminating Pod ss2-1 for scale down
I0902 13:44:17.035264       1 event.go:294] "Event occurred" object="statefulset-9084/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-1 in StatefulSet ss2 successful"
I0902 13:44:18.048949       1 namespace_controller.go:185] Namespace has been deleted provisioning-5356
E0902 13:44:18.234808       1 tokens_controller.go:262] error synchronizing serviceaccount apply-8103/default: serviceaccounts "default" not found
I0902 13:44:18.420218       1 namespace_controller.go:185] Namespace has been deleted replication-controller-1767
I0902 13:44:18.629090       1 pv_controller.go:879] volume "pvc-970c72d6-b7d4-45f3-9641-9b0aa01b1603" entered phase "Bound"
I0902 13:44:18.629171       1 pv_controller.go:982] volume "pvc-970c72d6-b7d4-45f3-9641-9b0aa01b1603" bound to claim "volume-expand-4507/csi-hostpathhm69w"
I0902 13:44:18.637461       1 pv_controller.go:823] claim "volume-expand-4507/csi-hostpathhm69w" entered phase "Bound"
I0902 13:44:18.798770       1 pv_controller.go:930] claim "provisioning-1062/pvc-p529c" bound to volume "local-jqbch"
I0902 13:44:18.801776       1 pv_controller.go:1340] isVolumeReleased[pvc-480c953b-70ce-407b-ad71-530268a99d34]: volume is released
I0902 13:44:18.805946       1 pv_controller.go:879] volume "local-jqbch" entered phase "Bound"
I0902 13:44:18.805978       1 pv_controller.go:982] volume "local-jqbch" bound to claim "provisioning-1062/pvc-p529c"
I0902 13:44:18.815216       1 pv_controller.go:823] claim "provisioning-1062/pvc-p529c" entered phase "Bound"
I0902 13:44:18.815465       1 pv_controller.go:930] claim "provisioning-7632/pvc-865x5" bound to volume "local-6vs2d"
I0902 13:44:18.822703       1 pv_controller.go:879] volume "local-6vs2d" entered phase "Bound"
I0902 13:44:18.822729       1 pv_controller.go:982] volume "local-6vs2d" bound to claim "provisioning-7632/pvc-865x5"
I0902 13:44:18.828720       1 pv_controller.go:823] claim "provisioning-7632/pvc-865x5" entered phase "Bound"
I0902 13:44:18.829609       1 pv_controller.go:930] claim "provisioning-3953/pvc-nzps4" bound to volume "local-2nt56"
I0902 13:44:18.836083       1 pv_controller.go:879] volume "local-2nt56" entered phase "Bound"
I0902 13:44:18.836109       1 pv_controller.go:982] volume "local-2nt56" bound to claim "provisioning-3953/pvc-nzps4"
I0902 13:44:18.845797       1 pv_controller.go:823] claim "provisioning-3953/pvc-nzps4" entered phase "Bound"
I0902 13:44:18.845994       1 pv_controller.go:930] claim "volume-3072/pvc-kdhw6" bound to volume "local-glbc2"
I0902 13:44:18.858004       1 pv_controller.go:879] volume "local-glbc2" entered phase "Bound"
I0902 13:44:18.858032       1 pv_controller.go:982] volume "local-glbc2" bound to claim "volume-3072/pvc-kdhw6"
I0902 13:44:18.866234       1 pv_controller.go:823] claim "volume-3072/pvc-kdhw6" entered phase "Bound"
I0902 13:44:19.537228       1 namespace_controller.go:185] Namespace has been deleted webhook-7521
I0902 13:44:19.581878       1 namespace_controller.go:185] Namespace has been deleted configmap-3750
I0902 13:44:19.656545       1 namespace_controller.go:185] Namespace has been deleted webhook-7521-markers
I0902 13:44:19.773489       1 namespace_controller.go:185] Namespace has been deleted conntrack-178
I0902 13:44:19.972495       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 2"
I0902 13:44:19.972749       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" need=2 deleting=1
I0902 13:44:19.972901       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]
I0902 13:44:19.974576       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-5ff6986c95" pod="deployment-9199/test-rolling-update-with-lb-5ff6986c95-7qjlr"
I0902 13:44:19.983046       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 2"
I0902 13:44:19.983490       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-59c4fc87b4" need=2 creating=1
I0902 13:44:19.990938       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:19.992463       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-5ff6986c95-7qjlr"
I0902 13:44:19.993628       1 garbagecollector.go:471] "Processing object" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95-7qjlr" objectUID=59e61c11-09a2-4394-aaf2-c5ead6c83b5d kind="CiliumEndpoint" virtual=false
I0902 13:44:19.997630       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-59c4fc87b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-59c4fc87b4-4cjk7"
I0902 13:44:20.101640       1 garbagecollector.go:580] "Deleting object" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95-7qjlr" objectUID=59e61c11-09a2-4394-aaf2-c5ead6c83b5d kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:20.305111       1 namespace_controller.go:185] Namespace has been deleted volume-7494
I0902 13:44:20.309973       1 event.go:294] "Event occurred" object="statefulset-6610/test-ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod test-ss-0 in StatefulSet test-ss successful"
I0902 13:44:20.508359       1 namespace_controller.go:185] Namespace has been deleted fsgroupchangepolicy-5411
I0902 13:44:20.610239       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-970c72d6-b7d4-45f3-9641-9b0aa01b1603" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-4507^dee70ac5-0bf3-11ec-8ab4-b20c93327bb3") from node "ip-172-20-42-46.eu-central-1.compute.internal" 
E0902 13:44:20.719397       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0902 13:44:20.858691       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-2642-3929/default: secrets "default-token-q8pb9" is forbidden: unable to create new content in namespace volume-expand-2642-3929 because it is being terminated
I0902 13:44:20.886821       1 namespace_controller.go:185] Namespace has been deleted prestop-3614
I0902 13:44:20.967104       1 stateful_set_control.go:521] StatefulSet statefulset-9084/ss2 terminating Pod ss2-0 for scale down
I0902 13:44:20.973011       1 event.go:294] "Event occurred" object="statefulset-9084/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-0 in StatefulSet ss2 successful"
I0902 13:44:21.157616       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-970c72d6-b7d4-45f3-9641-9b0aa01b1603" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-4507^dee70ac5-0bf3-11ec-8ab4-b20c93327bb3") from node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:44:21.158557       1 event.go:294] "Event occurred" object="volume-expand-4507/pod-188e1837-c378-44ac-bf59-b77e28383027" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-970c72d6-b7d4-45f3-9641-9b0aa01b1603\" "
I0902 13:44:21.177396       1 pv_controller.go:1340] isVolumeReleased[pvc-480c953b-70ce-407b-ad71-530268a99d34]: volume is released
I0902 13:44:21.346220       1 pv_controller_base.go:505] deletion of claim "volume-4814/aws22w5c" was already processed
I0902 13:44:22.001648       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-480c953b-70ce-407b-ad71-530268a99d34" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f40356ec66ad3c24") on node "ip-172-20-42-46.eu-central-1.compute.internal" 
I0902 13:44:22.213771       1 namespace_controller.go:185] Namespace has been deleted emptydir-9682
I0902 13:44:22.648730       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1618-9736
I0902 13:44:22.795264       1 event.go:294] "Event occurred" object="statefulset-7854/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
I0902 13:44:23.990146       1 garbagecollector.go:471] "Processing object" object="services-9188/verify-service-up-exec-pod-hglnj" objectUID=6d85a462-c410-4d57-a5e9-70b27e29db44 kind="CiliumEndpoint" virtual=false
I0902 13:44:23.994163       1 garbagecollector.go:580] "Deleting object" object="services-9188/verify-service-up-exec-pod-hglnj" objectUID=6d85a462-c410-4d57-a5e9-70b27e29db44 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:24.935800       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" need=1 deleting=1
I0902 13:44:24.935885       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]
I0902 13:44:24.935985       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-5ff6986c95" pod="deployment-9199/test-rolling-update-with-lb-5ff6986c95-rs5m2"
I0902 13:44:24.936793       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 1"
I0902 13:44:24.950012       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-59c4fc87b4" need=3 creating=1
I0902 13:44:24.951105       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 3"
I0902 13:44:24.966204       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:24.968315       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-59c4fc87b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-59c4fc87b4-8hghg"
I0902 13:44:24.969741       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-5ff6986c95-rs5m2"
I0902 13:44:24.974280       1 garbagecollector.go:471] "Processing object" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95-rs5m2" objectUID=3f5f4e64-1953-4f89-9f55-7b88d4a546cb kind="CiliumEndpoint" virtual=false
I0902 13:44:24.984130       1 garbagecollector.go:580] "Deleting object" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95-rs5m2" objectUID=3f5f4e64-1953-4f89-9f55-7b88d4a546cb kind="CiliumEndpoint" propagationPolicy=Background
W0902 13:44:25.001601       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "deployment-9199/test-rolling-update-with-lb", retrying. Error: EndpointSlice informer cache is out of date
I0902 13:44:25.012023       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-3953/pvc-nzps4"
I0902 13:44:25.017323       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:25.028649       1 pv_controller.go:640] volume "local-2nt56" is released and reclaim policy "Retain" will be executed
I0902 13:44:25.032147       1 pv_controller.go:879] volume "local-2nt56" entered phase "Released"
I0902 13:44:25.122048       1 pv_controller_base.go:505] deletion of claim "provisioning-3953/pvc-nzps4" was already processed
I0902 13:44:25.354248       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4489-6513/csi-mockplugin-0" objectUID=184e8eba-9634-4b92-8f4f-6bbeed553683 kind="Pod" virtual=false
I0902 13:44:25.354449       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-4489-6513/csi-mockplugin
I0902 13:44:25.354487       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4489-6513/csi-mockplugin-6bdb5c8ddd" objectUID=7f137463-cb8e-4861-942d-9c643bc61ea3 kind="ControllerRevision" virtual=false
I0902 13:44:25.357529       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-4489-6513/csi-mockplugin-6bdb5c8ddd" objectUID=7f137463-cb8e-4861-942d-9c643bc61ea3 kind="ControllerRevision" propagationPolicy=Background
I0902 13:44:25.357918       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-4489-6513/csi-mockplugin-0" objectUID=184e8eba-9634-4b92-8f4f-6bbeed553683 kind="Pod" propagationPolicy=Background
I0902 13:44:25.914560       1 namespace_controller.go:185] Namespace has been deleted volume-expand-2642-3929
I0902 13:44:26.084203       1 garbagecollector.go:471] "Processing object" object="statefulset-6091/ss-0" objectUID=428f4701-ed70-4343-9ff2-73bcc100346c kind="CiliumEndpoint" virtual=false
I0902 13:44:26.109434       1 garbagecollector.go:580] "Deleting object" object="statefulset-6091/ss-0" objectUID=428f4701-ed70-4343-9ff2-73bcc100346c kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:26.130749       1 event.go:294] "Event occurred" object="statefulset-6091/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0902 13:44:26.146375       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4489
I0902 13:44:26.158738       1 event.go:294] "Event occurred" object="statefulset-6091/ss-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae\" Volume is already exclusively attached to one node and can't be attached to another"
W0902 13:44:26.158935       1 reconciler.go:335] Multi-Attach error for volume "pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-002a09058d0b38e43") from node "ip-172-20-45-138.eu-central-1.compute.internal" Volume is already exclusively attached to node ip-172-20-42-46.eu-central-1.compute.internal and can't be attached to another
I0902 13:44:26.325756       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" need=0 deleting=1
I0902 13:44:26.325790       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9199/test-rolling-update-with-lb-5ff6986c95" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]
I0902 13:44:26.325931       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-5ff6986c95" pod="deployment-9199/test-rolling-update-with-lb-5ff6986c95-j9h7m"
I0902 13:44:26.326480       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 0"
I0902 13:44:26.343914       1 garbagecollector.go:471] "Processing object" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95-j9h7m" objectUID=ac1f2cca-655f-419a-92c3-5dc6e3775321 kind="CiliumEndpoint" virtual=false
I0902 13:44:26.344250       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-5ff6986c95-j9h7m"
I0902 13:44:26.358778       1 garbagecollector.go:580] "Deleting object" object="deployment-9199/test-rolling-update-with-lb-5ff6986c95-j9h7m" objectUID=ac1f2cca-655f-419a-92c3-5dc6e3775321 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:26.952527       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-686dff95d9" need=1 creating=1
I0902 13:44:26.955407       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 1"
I0902 13:44:26.966937       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
E0902 13:44:26.983299       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-5684/default: secrets "default-token-r2zgs" is forbidden: unable to create new content in namespace nettest-5684 because it is being terminated
I0902 13:44:26.984470       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-686dff95d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-686dff95d9-nq55v"
I0902 13:44:27.079845       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-9062, name: inline-volume-tester2-qx2wp, uid: 4b591ff9-158c-4d09-8006-ac3589796d9a] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0902 13:44:27.081862       1 garbagecollector.go:471] "Processing object" object="ephemeral-9062/inline-volume-tester2-qx2wp" objectUID=a0954da5-34aa-4b54-8bfc-91553673c7bd kind="CiliumEndpoint" virtual=false
I0902 13:44:27.082409       1 garbagecollector.go:471] "Processing object" object="ephemeral-9062/inline-volume-tester2-qx2wp" objectUID=4b591ff9-158c-4d09-8006-ac3589796d9a kind="Pod" virtual=false
I0902 13:44:27.098540       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-9062, name: inline-volume-tester2-qx2wp, uid: a0954da5-34aa-4b54-8bfc-91553673c7bd] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-9062, name: inline-volume-tester2-qx2wp, uid: 4b591ff9-158c-4d09-8006-ac3589796d9a] is deletingDependents
I0902 13:44:27.101436       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9062/inline-volume-tester2-qx2wp" objectUID=a0954da5-34aa-4b54-8bfc-91553673c7bd kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:27.109209       1 garbagecollector.go:471] "Processing object" object="ephemeral-9062/inline-volume-tester2-qx2wp" objectUID=a0954da5-34aa-4b54-8bfc-91553673c7bd kind="CiliumEndpoint" virtual=false
I0902 13:44:27.109458       1 garbagecollector.go:471] "Processing object" object="ephemeral-9062/inline-volume-tester2-qx2wp" objectUID=4b591ff9-158c-4d09-8006-ac3589796d9a kind="Pod" virtual=false
I0902 13:44:27.112394       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-9062, name: inline-volume-tester2-qx2wp, uid: 4b591ff9-158c-4d09-8006-ac3589796d9a]
E0902 13:44:28.161383       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods
E0902 13:44:28.283885       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods
E0902 13:44:28.406661       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods
I0902 13:44:28.514031       1 event.go:294] "Event occurred" object="provisioning-5942/awsscs7w" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E0902 13:44:28.563816       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods
I0902 13:44:28.753360       1 event.go:294] "Event occurred" object="provisioning-5942/awsscs7w" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
E0902 13:44:28.758474       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods
E0902 13:44:28.948410       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods
E0902 13:44:29.246073       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods
E0902 13:44:29.633694       1 tokens_controller.go:262] error synchronizing serviceaccount volume-4814/default: secrets "default-token-fzwkz" is forbidden: unable to create new content in namespace volume-4814 because it is being terminated
E0902 13:44:29.739366       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods
I0902 13:44:29.837444       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-59c4fc87b4" need=2 deleting=1
I0902 13:44:29.837566       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9199/test-rolling-update-with-lb-59c4fc87b4" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4 test-rolling-update-with-lb-686dff95d9]
I0902 13:44:29.837692       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-59c4fc87b4" pod="deployment-9199/test-rolling-update-with-lb-59c4fc87b4-8hghg"
I0902 13:44:29.838564       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-59c4fc87b4 to 2"
I0902 13:44:29.850623       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 2"
I0902 13:44:29.853306       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9199/test-rolling-update-with-lb-686dff95d9" need=2 creating=1
I0902 13:44:29.854166       1 garbagecollector.go:471] "Processing object" object="deployment-9199/test-rolling-update-with-lb-59c4fc87b4-8hghg" objectUID=0dd91e51-72df-417e-a4d7-95b789e28c38 kind="CiliumEndpoint" virtual=false
W0902 13:44:29.862559       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "deployment-9199/test-rolling-update-with-lb", retrying. Error: EndpointSlice informer cache is out of date
I0902 13:44:29.862819       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-59c4fc87b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-59c4fc87b4-8hghg"
I0902 13:44:29.868020       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb-686dff95d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-686dff95d9-npkq2"
I0902 13:44:29.870915       1 garbagecollector.go:580] "Deleting object" object="deployment-9199/test-rolling-update-with-lb-59c4fc87b4-8hghg" objectUID=0dd91e51-72df-417e-a4d7-95b789e28c38 kind="CiliumEndpoint" propagationPolicy=Background
I0902 13:44:29.876237       1 endpoints_controller.go:374] "Error syncing endpoints, retrying" service="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:29.878983       1 event.go:294] "Event occurred" object="deployment-9199/test-rolling-update-with-lb" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint deployment-9199/test-rolling-update-with-lb: Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:29.917563       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9199/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0902 13:44:30.151706       1 namespace_controller.go:185] Namespace has been deleted projected-8495
I0902 13:44:30.169346       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-1062/pvc-p529c"
I0902 13:44:30.174433       1 pv_controller.go:640] volume "local-jqbch" is released and reclaim policy "Retain" will be executed
I0902 13:44:30.177974       1 pv_controller.go:879] volume "local-jqbch" entered phase "Released"
I0902 13:44:30.282439       1 pv_controller_base.go:505] deletion of claim "provisioning-1062/pvc-p529c" was already processed
E0902 13:44:30.304298       1 pv_controller.go:1451] error finding provisioning plugin for claim volumemode-4152/pvc-gx98k: storageclass.storage.k8s.io "volumemode-4152" not found
I0902 13:44:30.304609       1 event.go:294] "Event occurred" object="volumemode-4152/pvc-gx98k" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-4152\" not found"
E0902 13:44:30.366007       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the 
requested resource\nE0902 13:44:30.502392       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods\nI0902 13:44:30.523817       1 pv_controller.go:879] volume \"aws-jd2w8\" entered phase \"Available\"\nI0902 13:44:30.536486       1 stateful_set_control.go:555] StatefulSet statefulset-7854/ss2 terminating Pod ss2-0 for update\nI0902 13:44:30.548514       1 event.go:294] \"Event occurred\" object=\"statefulset-7854/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-0 in StatefulSet ss2 successful\"\nW0902 13:44:30.558418       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-7854/test\", retrying. Error: EndpointSlice informer cache is out of date\nI0902 13:44:30.869500       1 event.go:294] \"Event occurred\" object=\"statefulset-6610/test-ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod test-ss-1 in StatefulSet test-ss successful\"\nI0902 13:44:30.906905       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-9084/ss2-5bbbc9fc94\" objectUID=2f4909f9-4d36-4aef-9039-3ef3966e66e6 kind=\"ControllerRevision\" virtual=false\nI0902 13:44:30.907020       1 stateful_set.go:440] StatefulSet has been deleted statefulset-9084/ss2\nI0902 13:44:30.907088       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-9084/ss2-677d6db895\" objectUID=b7118510-d77a-4f94-bee8-d5939c612855 kind=\"ControllerRevision\" virtual=false\nI0902 13:44:30.909684       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-9084/ss2-5bbbc9fc94\" objectUID=2f4909f9-4d36-4aef-9039-3ef3966e66e6 kind=\"ControllerRevision\" propagationPolicy=Background\nI0902 13:44:30.910139       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-9084/ss2-677d6db895\" 
objectUID=b7118510-d77a-4f94-bee8-d5939c612855 kind=\"ControllerRevision\" propagationPolicy=Background\nE0902 13:44:30.992768       1 tokens_controller.go:262] error synchronizing serviceaccount certificates-2732/default: secrets \"default-token-sgdt4\" is forbidden: unable to create new content in namespace certificates-2732 because it is being terminated\nI0902 13:44:31.778848       1 namespace_controller.go:185] Namespace has been deleted podtemplate-6502\nI0902 13:44:31.862238       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7632/pvc-865x5\"\nI0902 13:44:31.870835       1 pv_controller.go:640] volume \"local-6vs2d\" is released and reclaim policy \"Retain\" will be executed\nI0902 13:44:31.875229       1 pv_controller.go:879] volume \"local-6vs2d\" entered phase \"Released\"\nE0902 13:44:31.924763       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods\nI0902 13:44:31.972708       1 pv_controller_base.go:505] deletion of claim \"provisioning-7632/pvc-865x5\" was already processed\nI0902 13:44:32.085681       1 pv_controller.go:879] volume \"pvc-a0c7d73e-3a64-4699-9d15-4703ae82dff3\" entered phase \"Bound\"\nI0902 13:44:32.085765       1 pv_controller.go:982] volume \"pvc-a0c7d73e-3a64-4699-9d15-4703ae82dff3\" bound to claim \"provisioning-5942/awsscs7w\"\nI0902 13:44:32.098273       1 pv_controller.go:823] claim \"provisioning-5942/awsscs7w\" entered phase \"Bound\"\nI0902 13:44:32.168050       1 event.go:294] \"Event occurred\" object=\"fsgroupchangepolicy-2903/awswthv6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0902 13:44:32.396777       1 event.go:294] \"Event occurred\" object=\"fsgroupchangepolicy-2903/awswthv6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0902 13:44:32.634813       1 garbagecollector.go:471] \"Processing object\" object=\"endpointslicemirroring-5556/example-custom-endpoints-5sr8t\" objectUID=1b7c779e-fedb-497b-8b4d-ee7cad73c85e kind=\"EndpointSlice\" virtual=false\nI0902 13:44:32.638521       1 garbagecollector.go:580] \"Deleting object\" object=\"endpointslicemirroring-5556/example-custom-endpoints-5sr8t\" objectUID=1b7c779e-fedb-497b-8b4d-ee7cad73c85e kind=\"EndpointSlice\" propagationPolicy=Background\nE0902 13:44:32.641093       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"discovery.k8s.io/v1\", Kind:\"EndpointSlice\", Name:\"example-custom-endpoints-5sr8t\", UID:\"1b7c779e-fedb-497b-8b4d-ee7cad73c85e\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"endpointslicemirroring-5556\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Endpoints\", Name:\"example-custom-endpoints\", UID:\"715b2536-e01a-47d4-aec9-70082225ec33\", Controller:(*bool)(0x4002a52f0c), BlockOwnerDeletion:(*bool)(0x4002a52f0d)}}}: endpointslices.discovery.k8s.io \"example-custom-endpoints-5sr8t\" not 
found\nI0902 13:44:32.646733       1 garbagecollector.go:471] \"Processing object\" object=\"endpointslicemirroring-5556/example-custom-endpoints-5sr8t\" objectUID=1b7c779e-fedb-497b-8b4d-ee7cad73c85e kind=\"EndpointSlice\" virtual=false\nI0902 13:44:32.840463       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-a0c7d73e-3a64-4699-9d15-4703ae82dff3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05e1c50bb3ad7f9c0\") from node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:44:33.591693       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-3318/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI0902 13:44:33.592026       1 event.go:294] \"Event occurred\" object=\"webhook-3318/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI0902 13:44:33.604515       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-3318/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0902 13:44:33.604853       1 event.go:294] \"Event occurred\" object=\"webhook-3318/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-ttwsr\"\nI0902 13:44:33.651574       1 event.go:294] \"Event occurred\" object=\"webhook-9471/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI0902 13:44:33.651877       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-9471/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI0902 
13:44:33.661621       1 event.go:294] \"Event occurred\" object=\"webhook-9471/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-2vgg5\"\nI0902 13:44:33.669743       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-9471/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0902 13:44:33.800772       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:44:33.801245       1 pv_controller.go:930] claim \"volumemode-4152/pvc-gx98k\" bound to volume \"aws-jd2w8\"\nI0902 13:44:33.806195       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"gc-5560/simpletest-rc-to-be-deleted\" need=10 creating=10\nI0902 13:44:33.870774       1 pv_controller.go:879] volume \"aws-jd2w8\" entered phase \"Bound\"\nI0902 13:44:33.871178       1 pv_controller.go:982] volume \"aws-jd2w8\" bound to claim \"volumemode-4152/pvc-gx98k\"\nI0902 13:44:33.878419       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-bjb8l\"\nE0902 13:44:33.928955       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:44:33.950284       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-k6njn\"\nI0902 13:44:33.950868       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-jcgws\"\nI0902 13:44:34.002755       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-z8bjh\"\nI0902 13:44:34.006175       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-dm5c2\"\nI0902 13:44:34.006488       1 pv_controller.go:823] claim \"volumemode-4152/pvc-gx98k\" entered phase \"Bound\"\nI0902 13:44:34.007137       1 event.go:294] \"Event occurred\" object=\"fsgroupchangepolicy-2903/awswthv6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0902 13:44:34.017719       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-98wvt\"\nI0902 13:44:34.019208       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-bw6q9\"\nI0902 13:44:34.037921       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" 
kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-kq2cp\"\nI0902 13:44:34.038120       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-v2ph9\"\nI0902 13:44:34.038329       1 event.go:294] \"Event occurred\" object=\"gc-5560/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-hkzkt\"\nE0902 13:44:34.418104       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-7132/pvc-lkbd5: storageclass.storage.k8s.io \"volume-7132\" not found\nI0902 13:44:34.418821       1 event.go:294] \"Event occurred\" object=\"volume-7132/pvc-lkbd5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-7132\\\" not found\"\nE0902 13:44:34.438366       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-5059/default: secrets \"default-token-gk86z\" is forbidden: unable to create new content in namespace configmap-5059 because it is being terminated\nI0902 13:44:34.502186       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-002a09058d0b38e43\") on node \"ip-172-20-42-46.eu-central-1.compute.internal\" \nI0902 13:44:34.546537       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-193aad23-26c2-4c57-a19a-10bc0f06bcae\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-002a09058d0b38e43\") on node \"ip-172-20-42-46.eu-central-1.compute.internal\" \nI0902 13:44:34.558277       1 pv_controller.go:879] volume 
\"local-jmgm8\" entered phase \"Available\"\nI0902 13:44:34.678689       1 namespace_controller.go:185] Namespace has been deleted volume-4814\nI0902 13:44:34.822063       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-4316/affinity-nodeport-timeout\" need=3 creating=3\nI0902 13:44:34.935199       1 event.go:294] \"Event occurred\" object=\"services-4316/affinity-nodeport-timeout\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-nodeport-timeout-5bw9c\"\nI0902 13:44:35.006013       1 event.go:294] \"Event occurred\" object=\"services-4316/affinity-nodeport-timeout\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-nodeport-timeout-cm4lx\"\nI0902 13:44:35.006046       1 event.go:294] \"Event occurred\" object=\"services-4316/affinity-nodeport-timeout\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-nodeport-timeout-2znsd\"\nI0902 13:44:35.633924       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-jd2w8\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-03cf49eb9235abb92\") from node \"ip-172-20-42-46.eu-central-1.compute.internal\" \nI0902 13:44:35.765908       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4489-6513\nE0902 13:44:35.782333       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:44:35.958669       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [mygroup.example.com/v1beta1, Resource=foo296w4as mygroup.example.com/v1beta1, Resource=foo2gtt5as mygroup.example.com/v1beta1, Resource=foo5xqgwas 
mygroup.example.com/v1beta1, Resource=foognwj6as mygroup.example.com/v1beta1, Resource=foorlsw8as mygroup.example.com/v1beta1, Resource=foowj8zdas], removed: []\nI0902 13:44:36.032041       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nW0902 13:44:36.106102       1 utils.go:265] Service services-9188/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0902 13:44:36.129435       1 namespace_controller.go:185] Namespace has been deleted certificates-2732\nI0902 13:44:36.528205       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-9199/test-rolling-update-with-lb-59c4fc87b4\" need=1 deleting=1\nI0902 13:44:36.528414       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-9199/test-rolling-update-with-lb-59c4fc87b4\" relatedReplicaSets=[test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4 test-rolling-update-with-lb-686dff95d9 test-rolling-update-with-lb-864fb64577]\nI0902 13:44:36.528572       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-59c4fc87b4\" pod=\"deployment-9199/test-rolling-update-with-lb-59c4fc87b4-8vdf7\"\nI0902 13:44:36.532046       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-59c4fc87b4 to 1\"\nI0902 13:44:36.603426       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9199/test-rolling-update-with-lb-686dff95d9\" need=3 creating=1\nI0902 13:44:36.604098       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-59c4fc87b4-8vdf7\"\nI0902 13:44:36.604128       1 
event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 3\"\nI0902 13:44:36.629553       1 event.go:294] \"Event occurred\" object=\"deployment-9199/test-rolling-update-with-lb-686dff95d9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-686dff95d9-876rb\"\nI0902 13:44:36.814224       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9199/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0902 13:44:36.933634       1 shared_informer.go:247] Caches are synced for garbage collector \nI0902 13:44:36.933724       1 garbagecollector.go:254] synced garbage collector\nI0902 13:44:36.933769       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9199/test-rolling-update-with-lb-59c4fc87b4-8vdf7\" objectUID=d692f883-bb84-43f2-b0ad-6c7ebb9b58dc kind=\"CiliumEndpoint\" virtual=false\nW0902 13:44:37.062376       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-7854/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nI0902 13:44:37.063065       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-a0c7d73e-3a64-4699-9d15-4703ae82dff3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05e1c50bb3ad7f9c0\") from node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:44:37.063360       1 event.go:294] \"Event occurred\" object=\"provisioning-5942/pod-subpath-test-dynamicpv-29wn\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-a0c7d73e-3a64-4699-9d15-4703ae82dff3\\\" \"\nI0902 13:44:37.074470       1 event.go:294] \"Event occurred\" object=\"statefulset-7854/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nE0902 13:44:37.257807       1 namespace_controller.go:162] deletion of namespace disruption-6190 failed: unexpected items still remain in namespace: disruption-6190 for gvr: /v1, Resource=pods\nI0902 13:44:37.264932       1 pv_controller.go:879] volume \"pvc-d3c5107c-3f2c-4e08-bd36-493ed76452e0\" entered phase \"Bound\"\nI0902 13:44:37.265042       1 pv_controller.go:982] volume \"pvc-d3c5107c-3f2c-4e08-bd36-493ed76452e0\" bound to claim \"fsgroupchangepolicy-2903/awswthv6\"\nI0902 13:44:37.288649       1 pv_controller.go:823] claim \"fsgroupchangepolicy-2903/awswthv6\" entered phase \"Bound\"\nI0902 13:44:37.502704       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-d3c5107c-3f2c-4e08-bd36-493ed76452e0\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-066e68e44bffbb4b2\") from node \"ip-172-20-61-191.eu-central-1.compute.internal\" \nI0902 13:44:37.895040       1 stateful_set_control.go:555] StatefulSet statefulset-6610/test-ss terminating Pod test-ss-0 for update\nI0902 13:44:37.973738       1 event.go:294] \"Event occurred\" object=\"statefulset-6610/test-ss\" 
kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod test-ss-0 in StatefulSet test-ss successful\"\nI0902 13:44:37.995423       1 namespace_controller.go:185] Namespace has been deleted provisioning-3953\nE0902 13:44:38.031041       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0902 13:44:38.106848       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0902 13:44:38.603072       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9723/e2e-test-webhook-qskfv\" objectUID=d0427581-28b1-44ae-9cd0-98ae2d7a36bb kind=\"EndpointSlice\" virtual=false\nI0902 13:44:38.681612       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9723/e2e-test-webhook-qskfv\" objectUID=d0427581-28b1-44ae-9cd0-98ae2d7a36bb kind=\"EndpointSlice\" propagationPolicy=Background\nE0902 13:44:38.725729       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-1062/default: secrets \"default-token-8n9t2\" is forbidden: unable to create new content in namespace provisioning-1062 because it is being terminated\nI0902 13:44:38.740774       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9723/sample-webhook-deployment-78988fc6cd\" objectUID=056f8d09-d2f0-4e58-ab1d-c93bda091f67 kind=\"ReplicaSet\" virtual=false\nI0902 13:44:38.741332       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-9723/sample-webhook-deployment\"\nI0902 13:44:38.751658       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9723/sample-webhook-deployment-78988fc6cd\" objectUID=056f8d09-d2f0-4e58-ab1d-c93bda091f67 
kind=\"ReplicaSet\" propagationPolicy=Background\nI0902 13:44:38.766760       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9723/sample-webhook-deployment-78988fc6cd-bzk5b\" objectUID=9cf0c69c-6b31-4904-b973-9fd95f15086d kind=\"Pod\" virtual=false\nI0902 13:44:38.786485       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9723/sample-webhook-deployment-78988fc6cd-bzk5b\" objectUID=9cf0c69c-6b31-4904-b973-9fd95f15086d kind=\"Pod\" propagationPolicy=Background\nI0902 13:44:38.861964       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-9723/sample-webhook-deployment-78988fc6cd-bzk5b\" objectUID=e82630a7-f952-411d-ab73-3f53a47b8cb5 kind=\"CiliumEndpoint\" virtual=false\nI0902 13:44:38.869128       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-9723/sample-webhook-deployment-78988fc6cd-bzk5b\" objectUID=e82630a7-f952-411d-ab73-3f53a47b8cb5 kind=\"CiliumEndpoint\" propagationPolicy=Background\nW0902 13:44:39.126841       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"services-4316/affinity-nodeport-timeout\", retrying. 
Error: EndpointSlice informer cache is out of date\nI0902 13:44:39.140926       1 pv_controller.go:879] volume \"local-pvwjfqs\" entered phase \"Available\"\nI0902 13:44:39.159515       1 pv_controller.go:930] claim \"persistent-local-volumes-test-2047/pvc-f7bch\" bound to volume \"local-pvwjfqs\"\nI0902 13:44:39.172299       1 namespace_controller.go:185] Namespace has been deleted pv-7129\nI0902 13:44:39.211761       1 pv_controller.go:879] volume \"local-pvwjfqs\" entered phase \"Bound\"\nI0902 13:44:39.211938       1 pv_controller.go:982] volume \"local-pvwjfqs\" bound to claim \"persistent-local-volumes-test-2047/pvc-f7bch\"\nI0902 13:44:39.287032       1 endpoints_controller.go:374] \"Error syncing endpoints, retrying\" service=\"services-4316/affinity-nodeport-timeout\" err=\"Operation cannot be fulfilled on endpoints \\\"affinity-nodeport-timeout\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0902 13:44:39.287169       1 pv_controller.go:823] claim \"persistent-local-volumes-test-2047/pvc-f7bch\" entered phase \"Bound\"\nI0902 13:44:39.287267       1 event.go:294] \"Event occurred\" object=\"services-4316/affinity-nodeport-timeout\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint services-4316/affinity-nodeport-timeout: Operation cannot be fulfilled on endpoints \\\"affinity-nodeport-timeout\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0902 13:44:39.631969       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-9084/test-w2sm6\" objectUID=43f70be1-7889-448b-b7ca-24a209fb347b kind=\"EndpointSlice\" virtual=false\nI0902 13:44:39.665006       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-9084/test-w2sm6\" objectUID=43f70be1-7889-448b-b7ca-24a209fb347b kind=\"EndpointSlice\" propagationPolicy=Background\nE0902 13:44:39.703819       1 
tokens_controller.go:262] error synchronizing serviceaccount endpointslicemirroring-5556/default: secrets "default-token-wqfdz" is forbidden: unable to create new content in namespace endpointslicemirroring-5556 because it is being terminated
I0902 13:44:39.859726       1 expand_controller.go:289] Ignoring the PVC "volume-expand-4507/csi-hostpathhm69w" (uid: "970c72d6-b7d4-45f3-9641-9b0aa01b1603") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
I0902 13:44:39.860017       1 event.go:294] "Event occurred" object="volume-expand-4507/csi-hostpathhm69w" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ExternalExpanding" message="Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."
E0902 13:44:39.970267       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-9084/default: secrets "default-token-cngdd" is forbidden: unable to create new content in namespace statefulset-9084 because it is being terminated
I0902 13:44:40.120420       1 replica_set.go:563] "Too few replicas" replicaSet="services-9188/service-headless" need=3 creating=1
I0902 13:44:40.155744       1 replica_set.go:563] "Too few replicas" replicaSet="services-9188/service-headless-toggled" need=3 creating=1
I0902 13:44:40.257666       1 graph_builder.go:587] add [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0902 13:44:40.258076       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-v2ph9" objectUID=a4262dc5-e281-47cb-9e96-cc45db086775 kind="Pod" virtual=false
I0902 13:44:40.258126       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-kq2cp" objectUID=1812fb67-4f7c-4d0e-94c8-63a674ed78f8 kind="Pod" virtual=false
I0902 13:44:40.258140       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-hkzkt" objectUID=e50b108d-1d6c-4b91-ad15-9e12f39d4ff9 kind="Pod" virtual=false
I0902 13:44:40.258151       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-bjb8l" objectUID=180db872-7745-4cca-a1c5-27a03f24ff53 kind="Pod" virtual=false
I0902 13:44:40.258168       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-bw6q9" objectUID=bfed466c-d105-4889-ab98-04aed043ea97 kind="Pod" virtual=false
I0902 13:44:40.258206       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-z8bjh" objectUID=3ae88356-4b4b-44b2-b713-f7216390aebe kind="Pod" virtual=false
I0902 13:44:40.258217       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-k6njn" objectUID=ed415907-7db0-491a-8e6d-964c33d61885 kind="Pod" virtual=false
I0902 13:44:40.258228       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-jcgws" objectUID=8067ccb6-bb67-4c1e-8bb1-651ea2930b8a kind="Pod" virtual=false
I0902 13:44:40.258239       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-dm5c2" objectUID=89d0f10b-5eac-4a51-9499-add7f3313be0 kind="Pod" virtual=false
I0902 13:44:40.258250       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted" objectUID=a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87 kind="ReplicationController" virtual=false
I0902 13:44:40.258257       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-98wvt" objectUID=ca2967ae-a1c4-4c72-8234-5389ce761ad9 kind="Pod" virtual=false
E0902 13:44:40.262087       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:44:40.304741       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-k6njn, uid: ed415907-7db0-491a-8e6d-964c33d61885] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.304893       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-jcgws, uid: 8067ccb6-bb67-4c1e-8bb1-651ea2930b8a] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.304978       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-dm5c2, uid: 89d0f10b-5eac-4a51-9499-add7f3313be0] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.305054       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-z8bjh, uid: 3ae88356-4b4b-44b2-b713-f7216390aebe] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.305121       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-bjb8l, uid: 180db872-7745-4cca-a1c5-27a03f24ff53] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.305215       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-bw6q9, uid: bfed466c-d105-4889-ab98-04aed043ea97] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.305310       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-98wvt, uid: ca2967ae-a1c4-4c72-8234-5389ce761ad9] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.305376       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-v2ph9, uid: a4262dc5-e281-47cb-9e96-cc45db086775] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.305442       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-kq2cp, uid: 1812fb67-4f7c-4d0e-94c8-63a674ed78f8] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.305508       1 garbagecollector.go:595] adding [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-hkzkt, uid: e50b108d-1d6c-4b91-ad15-9e12f39d4ff9] to attemptToDelete, because its owner [v1/ReplicationController, namespace: gc-5560, name: simpletest-rc-to-be-deleted, uid: a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87] is deletingDependents
I0902 13:44:40.338751       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-k6njn, uid: ed415907-7db0-491a-8e6d-964c33d61885] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground
I0902 13:44:40.338709       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-v2ph9, uid: a4262dc5-e281-47cb-9e96-cc45db086775] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground
I0902 13:44:40.339023       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-kq2cp, uid: 1812fb67-4f7c-4d0e-94c8-63a674ed78f8] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground
I0902 13:44:40.339207       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-z8bjh, uid: 3ae88356-4b4b-44b2-b713-f7216390aebe] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground
I0902 13:44:40.339253       1 garbagecollector.go:556] at least one owner of object [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-jcgws, uid: 8067ccb6-bb67-4c1e-8bb1-651ea2930b8a] has FinalizerDeletingDependents, and the object itself has dependents, so it is going to be deleted in Foreground
E0902 13:44:40.361613       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0902 13:44:40.366066       1 garbagecollector.go:522] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest-rc-to-be-deleted-dm5c2", UID:"89d0f10b-5eac-4a51-9499-add7f3313be0", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-5560"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"ReplicationController", Name:"simpletest-rc-to-stay", UID:"a11a80eb-8d92-4169-a10c-12d81b3dc8c9", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, will not garbage collect
I0902 13:44:40.366309       1 garbagecollector.go:526] remove dangling references []v1.OwnerReference(nil) and waiting references []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"ReplicationController", Name:"simpletest-rc-to-be-deleted", UID:"a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87", Controller:(*bool)(0x4002202b5a), BlockOwnerDeletion:(*bool)(0x4002202b5b)}} for object [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-dm5c2, uid: 89d0f10b-5eac-4a51-9499-add7f3313be0]
I0902 13:44:40.366718       1 garbagecollector.go:522] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest-rc-to-be-deleted-98wvt", UID:"ca2967ae-a1c4-4c72-8234-5389ce761ad9", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-5560"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"ReplicationController", Name:"simpletest-rc-to-stay", UID:"a11a80eb-8d92-4169-a10c-12d81b3dc8c9", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, will not garbage collect
I0902 13:44:40.367332       1 garbagecollector.go:526] remove dangling references []v1.OwnerReference(nil) and waiting references []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"ReplicationController", Name:"simpletest-rc-to-be-deleted", UID:"a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87", Controller:(*bool)(0x4002202eba), BlockOwnerDeletion:(*bool)(0x4002202ebb)}} for object [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-98wvt, uid: ca2967ae-a1c4-4c72-8234-5389ce761ad9]
I0902 13:44:40.366884       1 garbagecollector.go:522] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest-rc-to-be-deleted-bw6q9", UID:"bfed466c-d105-4889-ab98-04aed043ea97", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-5560"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"ReplicationController", Name:"simpletest-rc-to-stay", UID:"a11a80eb-8d92-4169-a10c-12d81b3dc8c9", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, will not garbage collect
I0902 13:44:40.367826       1 graph_builder.go:587] add [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-jcgws, uid: 8067ccb6-bb67-4c1e-8bb1-651ea2930b8a] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0902 13:44:40.367949       1 garbagecollector.go:471] "Processing object" object="gc-5560/simpletest-rc-to-be-deleted-jcgws" objectUID=7ce6ab15-4ebb-44d9-8abf-7da55fb851af kind="CiliumEndpoint" virtual=false
I0902 13:44:40.367627       1 garbagecollector.go:526] remove dangling references []v1.OwnerReference(nil) and waiting references []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"ReplicationController", Name:"simpletest-rc-to-be-deleted", UID:"a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87", Controller:(*bool)(0x4003741fea), BlockOwnerDeletion:(*bool)(0x4003741feb)}} for object [v1/Pod, namespace: gc-5560, name: simpletest-rc-to-be-deleted-bw6q9, uid: bfed466c-d105-4889-ab98-04aed043ea97]
I0902 13:44:40.367203       1 garbagecollector.go:522] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest-rc-to-be-deleted-hkzkt", UID:"e50b108d-1d6c-4b91-ad15-9e12f39d4ff9", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-5560"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"ReplicationController", Name:"simpletest-rc-to-stay", UID:"a11a80eb-8d92-4169-a10c-12d81b3dc8c9", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, will not garbage collect
I0902 13:44:40.367166       1 garbagecollector.go:522] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"simpletest-rc-to-be-deleted-bjb8l", UID:"180db872-7745-4cca-a1c5-27a03f24ff53", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"gc-5560"} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", K
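The garbage-collector records above follow the foreground cascading deletion flow: the owner (`simpletest-rc-to-be-deleted`) carries the `foregroundDeletion` finalizer and is in the "deletingDependents" state, each dependent Pod points back at it via `metadata.ownerReferences`, and Pods that also belong to the second, live owner (`simpletest-rc-to-stay`) are spared with "will not garbage collect". A minimal sketch of the object shapes and the survival check involved (names and UIDs copied from the log; the dict layout mirrors the Kubernetes API, but the `has_live_owner` helper is a hypothetical illustration, not the controller's actual code):

```python
# Sketch of the owner/dependent relationship driving the GC decisions above.
# The RC being deleted in the foreground carries the foregroundDeletion
# finalizer; its Pods reference it via metadata.ownerReferences.

owner = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {
        "namespace": "gc-5560",
        "name": "simpletest-rc-to-be-deleted",
        "uid": "a0cd1e5d-8e55-4a17-8874-3ee5f15d0e87",
        # This finalizer is what puts the owner in the "deletingDependents"
        # state reported by garbagecollector.go:595.
        "finalizers": ["foregroundDeletion"],
    },
}

dependent_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "namespace": "gc-5560",
        "name": "simpletest-rc-to-be-deleted-dm5c2",
        "uid": "89d0f10b-5eac-4a51-9499-add7f3313be0",
        "ownerReferences": [
            {   # Owner being deleted; blocks the owner until this Pod is gone.
                "apiVersion": "v1",
                "kind": "ReplicationController",
                "name": "simpletest-rc-to-be-deleted",
                "uid": owner["metadata"]["uid"],
                "blockOwnerDeletion": True,
            },
            {   # Second, live owner from the log's "will not garbage collect" lines.
                "apiVersion": "v1",
                "kind": "ReplicationController",
                "name": "simpletest-rc-to-stay",
                "uid": "a11a80eb-8d92-4169-a10c-12d81b3dc8c9",
            },
        ],
    },
}

def has_live_owner(obj, deleted_uids):
    """True if any ownerReference points at an owner that is not being deleted."""
    refs = obj["metadata"].get("ownerReferences", [])
    return any(r["uid"] not in deleted_uids for r in refs)

# dm5c2 also belongs to simpletest-rc-to-stay, so it survives, matching the
# "has at least one existing owner ... will not garbage collect" records.
print(has_live_owner(dependent_pod, {owner["metadata"]["uid"]}))  # True
```

Only the dangling/waiting reference to the deleted owner is removed from such a Pod (the `remove dangling references ... and waiting references ...` records); the Pod itself is collected only once every owner reference points at a deleted object.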