Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-19 13:26
Elapsed: 31m41s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0919 13:27:20.056030    4093 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0919 13:27:20.057675    4093 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.23.0-alpha.2+v1.23.0-alpha.1-184-g8ab1f8bbc4/linux/amd64/kops
I0919 13:27:20.803013    4093 up.go:43] Cleaning up any leaked resources from previous cluster
I0919 13:27:20.803042    4093 dumplogs.go:38] /logs/artifacts/1fcacd1e-194d-11ec-bfa0-9e8ae4027703/kops toolbox dump --name e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0919 13:27:20.822463    4112 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0919 13:27:20.822951    4112 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io" not found
W0919 13:27:21.293268    4093 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0919 13:27:21.293318    4093 down.go:48] /logs/artifacts/1fcacd1e-194d-11ec-bfa0-9e8ae4027703/kops delete cluster --name e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --yes
I0919 13:27:21.313209    4122 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0919 13:27:21.313334    4122 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io" not found
I0919 13:27:21.832974    4093 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/19 13:27:21 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0919 13:27:21.840387    4093 http.go:37] curl https://ip.jsb.workers.dev
I0919 13:27:21.932005    4093 up.go:144] /logs/artifacts/1fcacd1e-194d-11ec-bfa0-9e8ae4027703/kops create cluster --name e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.23.0-alpha.2 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-arm64-server-20210907 --channel=alpha --networking=cilium --container-runtime=containerd --zones=eu-central-1a --node-size=m6g.large --master-size=m6g.large --admin-access 35.202.252.144/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48
I0919 13:27:21.951039    4133 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0919 13:27:21.952014    4133 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0919 13:27:21.975594    4133 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0919 13:27:22.496598    4133 new_cluster.go:1052]  Cloud Provider ID = aws
... skipping 31 lines ...

I0919 13:27:47.411052    4093 up.go:181] /logs/artifacts/1fcacd1e-194d-11ec-bfa0-9e8ae4027703/kops validate cluster --name e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0919 13:27:47.430568    4153 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0919 13:27:47.430682    4153 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io

W0919 13:27:48.789740    4153 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
W0919 13:27:58.837659    4153 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
... skipping 304 lines ...
W0919 13:31:20.437589    4153 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 12 lines ...
Pod	kube-system/coredns-5dc785954d-gvqt2		system-cluster-critical pod "coredns-5dc785954d-gvqt2" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-nqsxp	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-nqsxp" is pending
Pod	kube-system/ebs-csi-controller-7498c865fb-snbqf	system-cluster-critical pod "ebs-csi-controller-7498c865fb-snbqf" is pending
Pod	kube-system/ebs-csi-node-flxc2			system-node-critical pod "ebs-csi-node-flxc2" is pending
Pod	kube-system/ebs-csi-node-gqkht			system-node-critical pod "ebs-csi-node-gqkht" is pending

Validation Failed
W0919 13:31:42.893299    4153 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 16 lines ...
Pod	kube-system/coredns-autoscaler-84d4cfd89c-nqsxp	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-nqsxp" is pending
Pod	kube-system/ebs-csi-controller-7498c865fb-snbqf	system-cluster-critical pod "ebs-csi-controller-7498c865fb-snbqf" is pending
Pod	kube-system/ebs-csi-node-flxc2			system-node-critical pod "ebs-csi-node-flxc2" is pending
Pod	kube-system/ebs-csi-node-gqkht			system-node-critical pod "ebs-csi-node-gqkht" is pending
Pod	kube-system/ebs-csi-node-vx5gj			system-node-critical pod "ebs-csi-node-vx5gj" is pending

Validation Failed
W0919 13:31:55.089779    4153 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 21 lines ...
Pod	kube-system/ebs-csi-node-88hlw			system-node-critical pod "ebs-csi-node-88hlw" is pending
Pod	kube-system/ebs-csi-node-flxc2			system-node-critical pod "ebs-csi-node-flxc2" is pending
Pod	kube-system/ebs-csi-node-gqkht			system-node-critical pod "ebs-csi-node-gqkht" is pending
Pod	kube-system/ebs-csi-node-l8skv			system-node-critical pod "ebs-csi-node-l8skv" is pending
Pod	kube-system/ebs-csi-node-vx5gj			system-node-critical pod "ebs-csi-node-vx5gj" is pending

Validation Failed
W0919 13:32:07.080289    4153 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 17 lines ...
Pod	kube-system/ebs-csi-node-88hlw			system-node-critical pod "ebs-csi-node-88hlw" is pending
Pod	kube-system/ebs-csi-node-flxc2			system-node-critical pod "ebs-csi-node-flxc2" is pending
Pod	kube-system/ebs-csi-node-gqkht			system-node-critical pod "ebs-csi-node-gqkht" is pending
Pod	kube-system/ebs-csi-node-l8skv			system-node-critical pod "ebs-csi-node-l8skv" is pending
Pod	kube-system/ebs-csi-node-vx5gj			system-node-critical pod "ebs-csi-node-vx5gj" is pending

Validation Failed
W0919 13:32:19.079960    4153 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 8 lines ...
VALIDATION ERRORS
KIND	NAME					MESSAGE
Pod	kube-system/coredns-5dc785954d-hhncb	system-cluster-critical pod "coredns-5dc785954d-hhncb" is pending
Pod	kube-system/ebs-csi-node-88hlw		system-node-critical pod "ebs-csi-node-88hlw" is pending
Pod	kube-system/ebs-csi-node-l8skv		system-node-critical pod "ebs-csi-node-l8skv" is pending

Validation Failed
W0919 13:32:31.039317    4153 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 230 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 794 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 166 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:35:11.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9892" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:12.118: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:35:13.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-7945" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:13.501: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 193 lines ...
• [SLOW TEST:12.262 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:23.343: INFO: Only supported for providers [azure] (not aws)
... skipping 121 lines ...
• [SLOW TEST:15.336 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:918
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 42 lines ...
• [SLOW TEST:16.549 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Sep 19 13:35:11.528: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-f0b7d1f1-ce3c-450d-b5e0-293471962b40
STEP: Creating a pod to test consume configMaps
Sep 19 13:35:11.965: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684" in namespace "projected-7888" to be "Succeeded or Failed"
Sep 19 13:35:12.074: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684": Phase="Pending", Reason="", readiness=false. Elapsed: 108.572598ms
Sep 19 13:35:14.183: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2171132s
Sep 19 13:35:16.293: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327271312s
Sep 19 13:35:18.401: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435720349s
Sep 19 13:35:20.510: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544971404s
Sep 19 13:35:22.619: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684": Phase="Pending", Reason="", readiness=false. Elapsed: 10.653106296s
Sep 19 13:35:24.727: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684": Phase="Pending", Reason="", readiness=false. Elapsed: 12.761770496s
Sep 19 13:35:26.836: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.870439303s
STEP: Saw pod success
Sep 19 13:35:26.836: INFO: Pod "pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684" satisfied condition "Succeeded or Failed"
Sep 19 13:35:26.944: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:35:27.193: INFO: Waiting for pod pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684 to disappear
Sep 19 13:35:27.301: INFO: Pod pod-projected-configmaps-5a0bb4ac-e943-471e-afc7-d8fbfc79a684 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.547 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":11,"failed":0}
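The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above follow the e2e framework's poll-until-terminal-phase pattern (roughly 2s intervals while the pod stays `Pending`). A minimal Python sketch of that loop, using a hypothetical `get_phase` callback in place of a real Kubernetes client call:

```python
import time

TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase or timeout.

    get_phase is a stand-in for a real client call such as reading
    pod.status.phase from the API server; sleep is injectable for tests.
    Returns the terminal phase, or raises TimeoutError.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in TERMINAL_PHASES:
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

if __name__ == "__main__":
    # Simulate a pod that stays Pending for three polls, then succeeds.
    phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
    print(wait_for_pod_condition(lambda: next(phases), sleep=lambda _: None))
```

This is a sketch of the observable behavior only; the real framework also logs elapsed time and the pod's readiness on every poll, as seen in the lines above.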

SSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:202
STEP: Creating a pod with an ignorelisted, but not allowlisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:35:30.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3639" for this suite.

... skipping 11 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:25.474 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":1,"skipped":27,"failed":0}
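The sysctl conformance test above sets `kernel.shm_rmid_forced` through the pod's security context, which is the supported way to request a safe sysctl. A sketch of such a pod spec, assuming illustrative names and image (the `securityContext.sysctls` field itself is the real API):

```yaml
# Illustrative pod requesting the safe sysctl exercised by the test above.
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: main
    image: busybox:1.28   # image choice is illustrative
    command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]
```

Unsafe sysctls, by contrast, must be explicitly allowlisted on the kubelet; the companion test above ("should not launch unsafe, but not explicitly enabled sysctls") verifies that a pod asking for an ignorelisted-but-not-allowlisted sysctl is rejected.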

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 67 lines ...
Sep 19 13:35:11.824: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Sep 19 13:35:12.153: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d" in namespace "security-context-test-2195" to be "Succeeded or Failed"
Sep 19 13:35:12.267: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 113.564492ms
Sep 19 13:35:14.375: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22246432s
Sep 19 13:35:16.484: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331503999s
Sep 19 13:35:18.598: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444670914s
Sep 19 13:35:20.706: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553420047s
Sep 19 13:35:22.821: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668263057s
Sep 19 13:35:24.931: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.777597502s
Sep 19 13:35:27.041: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.887561859s
Sep 19 13:35:29.150: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.996908875s
Sep 19 13:35:31.260: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.106863765s
Sep 19 13:35:33.369: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.216365609s
Sep 19 13:35:35.478: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d": Phase="Failed", Reason="", readiness=false. Elapsed: 23.325329587s
Sep 19 13:35:35.478: INFO: Pod "busybox-readonly-true-6f75a8bd-deb0-46fd-b7dc-8b02e78be83d" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:35:35.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2195" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":16,"failed":0}
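Note that in the run above the `busybox-readonly-true` pod ends in `Phase="Failed"` yet is reported as having "satisfied condition \"Succeeded or Failed\"" and the test still passes: the wait condition only requires a terminal phase, because this test expects the container's write to a read-only rootfs to fail. A hedged sketch of that two-step check (helper names are illustrative, not the framework's):

```python
def pod_reached_terminal_phase(phase):
    """The wait condition: either terminal phase satisfies it."""
    return phase in ("Succeeded", "Failed")

def readonly_rootfs_test_passed(phase):
    """After the wait, assert the *expected* terminal phase.

    With readOnlyRootFilesystem=true, a container that tries to write
    to its rootfs should exit non-zero, i.e. end in Phase=Failed.
    """
    assert pod_reached_terminal_phase(phase), "pod never terminated"
    return phase == "Failed"

print(readonly_rootfs_test_passed("Failed"))  # True: the failure is the expected outcome
```

So "satisfied condition" in the log signals only that polling stopped, not which outcome the test ultimately asserts.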

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:35.879: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 161 lines ...
W0919 13:35:11.924375    4832 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 19 13:35:11.924: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep 19 13:35:12.257: INFO: Waiting up to 5m0s for pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f" in namespace "security-context-5316" to be "Succeeded or Failed"
Sep 19 13:35:12.370: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 113.280966ms
Sep 19 13:35:14.480: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222894743s
Sep 19 13:35:16.592: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335575996s
Sep 19 13:35:18.708: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.45066353s
Sep 19 13:35:20.817: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.560388695s
Sep 19 13:35:22.928: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.67074498s
Sep 19 13:35:25.038: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.781285542s
Sep 19 13:35:27.149: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.892172096s
Sep 19 13:35:29.260: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.002626761s
Sep 19 13:35:31.369: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.112299924s
Sep 19 13:35:33.479: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.222109194s
Sep 19 13:35:35.589: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.332302481s
STEP: Saw pod success
Sep 19 13:35:35.589: INFO: Pod "security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f" satisfied condition "Succeeded or Failed"
Sep 19 13:35:35.698: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f container test-container: <nil>
STEP: delete the pod
Sep 19 13:35:36.005: INFO: Waiting for pod security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f to disappear
Sep 19 13:35:36.114: INFO: Pod security-context-52be1bb2-8847-43d3-a0fe-5aee0933cb7f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:26.354 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:36.471: INFO: Only supported for providers [gce gke] (not aws)
... skipping 219 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-620e16a3-cab1-458c-86df-b0cc1ce31186
STEP: Creating a pod to test consume configMaps
Sep 19 13:35:36.520: INFO: Waiting up to 5m0s for pod "pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df" in namespace "configmap-420" to be "Succeeded or Failed"
Sep 19 13:35:36.631: INFO: Pod "pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df": Phase="Pending", Reason="", readiness=false. Elapsed: 111.775399ms
Sep 19 13:35:38.743: INFO: Pod "pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22298831s
Sep 19 13:35:40.853: INFO: Pod "pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33345756s
Sep 19 13:35:42.964: INFO: Pod "pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.444132709s
STEP: Saw pod success
Sep 19 13:35:42.964: INFO: Pod "pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df" satisfied condition "Succeeded or Failed"
Sep 19 13:35:43.074: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:35:43.301: INFO: Waiting for pod pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df to disappear
Sep 19 13:35:43.412: INFO: Pod pod-configmaps-b4dbc6a6-9ff1-405f-9719-1f5d8d3028df no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.937 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:43.659: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
Sep 19 13:35:23.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 19 13:35:24.040: INFO: Waiting up to 5m0s for pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1" in namespace "emptydir-6700" to be "Succeeded or Failed"
Sep 19 13:35:24.149: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 108.845017ms
Sep 19 13:35:26.258: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218296031s
Sep 19 13:35:28.369: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328499109s
Sep 19 13:35:30.478: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438219626s
Sep 19 13:35:32.588: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548257824s
Sep 19 13:35:34.701: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660554895s
Sep 19 13:35:36.810: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.769440563s
Sep 19 13:35:38.919: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.878955523s
Sep 19 13:35:41.029: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.989070883s
Sep 19 13:35:43.139: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.098889613s
Sep 19 13:35:45.248: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.208003865s
STEP: Saw pod success
Sep 19 13:35:45.248: INFO: Pod "pod-2502717f-2ba7-4f22-9147-c76ccd045fa1" satisfied condition "Succeeded or Failed"
Sep 19 13:35:45.358: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-2502717f-2ba7-4f22-9147-c76ccd045fa1 container test-container: <nil>
STEP: delete the pod
Sep 19 13:35:45.974: INFO: Waiting for pod pod-2502717f-2ba7-4f22-9147-c76ccd045fa1 to disappear
Sep 19 13:35:46.083: INFO: Pod pod-2502717f-2ba7-4f22-9147-c76ccd045fa1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:22.919 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:46.345: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":2,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:47.343: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
Sep 19 13:35:35.827: INFO: PersistentVolumeClaim pvc-kknl9 found but phase is Pending instead of Bound.
Sep 19 13:35:37.937: INFO: PersistentVolumeClaim pvc-kknl9 found and phase=Bound (8.566136746s)
Sep 19 13:35:37.937: INFO: Waiting up to 3m0s for PersistentVolume local-qt4w5 to have phase Bound
Sep 19 13:35:38.046: INFO: PersistentVolume local-qt4w5 found and phase=Bound (109.3511ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rgr9
STEP: Creating a pod to test subpath
Sep 19 13:35:38.379: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rgr9" in namespace "provisioning-7519" to be "Succeeded or Failed"
Sep 19 13:35:38.489: INFO: Pod "pod-subpath-test-preprovisionedpv-rgr9": Phase="Pending", Reason="", readiness=false. Elapsed: 110.752113ms
Sep 19 13:35:40.600: INFO: Pod "pod-subpath-test-preprovisionedpv-rgr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221489227s
Sep 19 13:35:42.711: INFO: Pod "pod-subpath-test-preprovisionedpv-rgr9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3320684s
Sep 19 13:35:44.821: INFO: Pod "pod-subpath-test-preprovisionedpv-rgr9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44233428s
Sep 19 13:35:46.932: INFO: Pod "pod-subpath-test-preprovisionedpv-rgr9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553434502s
STEP: Saw pod success
Sep 19 13:35:46.932: INFO: Pod "pod-subpath-test-preprovisionedpv-rgr9" satisfied condition "Succeeded or Failed"
Sep 19 13:35:47.042: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-rgr9 container test-container-subpath-preprovisionedpv-rgr9: <nil>
STEP: delete the pod
Sep 19 13:35:47.279: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rgr9 to disappear
Sep 19 13:35:47.389: INFO: Pod pod-subpath-test-preprovisionedpv-rgr9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rgr9
Sep 19 13:35:47.389: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rgr9" in namespace "provisioning-7519"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:35:49.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5756" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":3,"skipped":64,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:49.784: INFO: Driver "local" does not provide raw block - skipping
... skipping 70 lines ...
• [SLOW TEST:39.973 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:49.979: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:35:49.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 19 13:35:52.211: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:35:52.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6454" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
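The Container Runtime test above exercises `terminationMessagePolicy: FallbackToLogsOnError`: when the container exits with an error and its termination-message file is empty, the kubelet uses the tail of the container log as the termination message instead (documented as capped at 2048 bytes or 80 lines, whichever is smaller). A rough Python sketch of that fallback rule, with simplified inputs:

```python
def termination_message(msg_file_contents, log_contents,
                        policy="File", max_bytes=2048, max_lines=80):
    """Sketch of the kubelet's termination-message selection.

    Use the termination-message file if non-empty; under
    FallbackToLogsOnError (and only on an error exit, elided here),
    fall back to the log tail, truncated to max_lines / max_bytes.
    """
    if msg_file_contents:
        return msg_file_contents
    if policy == "FallbackToLogsOnError":
        tail = "\n".join(log_contents.splitlines()[-max_lines:])
        return tail[-max_bytes:]
    return ""

print(termination_message("", "step 1\nstep 2\nDONE",
                          policy="FallbackToLogsOnError"))
```

In the logged run the container printed `DONE`, so the fallback produced a termination message matching the expected `&{DONE}` value.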

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:52.667: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 130 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:35:53.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8426" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":4,"skipped":73,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:53.826: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 59 lines ...
• [SLOW TEST:28.018 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:54.586: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
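The repeated "Driver X doesn't support Y -- skipping" lines above come from capability checks the storage test suite runs before each test pattern. A minimal sketch of that gate, with hypothetical names (`driverCaps`, `skipUnlessSupported`) standing in for the framework's real types:

```go
package main

import "fmt"

// driverCaps is a hypothetical, simplified stand-in for the capability
// information the e2e storage framework consults before running a pattern.
type driverCaps struct {
	name     string
	supports map[string]bool
}

// skipUnlessSupported returns a skip message (and true) when the driver
// does not support the requested volume pattern, mirroring log lines such
// as "Driver emptydir doesn't support PreprovisionedPV -- skipping".
func skipUnlessSupported(d driverCaps, pattern string) (string, bool) {
	if d.supports[pattern] {
		return "", false
	}
	return fmt.Sprintf("Driver %s doesn't support %s -- skipping", d.name, pattern), true
}

func main() {
	emptydir := driverCaps{name: "emptydir", supports: map[string]bool{"InlineVolume": true}}
	msg, skipped := skipUnlessSupported(emptydir, "PreprovisionedPV")
	fmt.Println(skipped, msg)
}
```

Skipped patterns surface as the `S` markers between the separators; only supported driver/pattern combinations produce full test blocks.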
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:35:44.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Sep 19 13:35:45.156: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2450" to be "Succeeded or Failed"
Sep 19 13:35:45.265: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 109.273945ms
Sep 19 13:35:47.375: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219587243s
Sep 19 13:35:49.486: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329944448s
Sep 19 13:35:51.597: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440685003s
Sep 19 13:35:53.707: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.550836477s
STEP: Saw pod success
Sep 19 13:35:53.707: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 19 13:35:53.817: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep 19 13:35:54.421: INFO: Waiting for pod pod-host-path-test to disappear
Sep 19 13:35:54.530: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.260 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":3,"skipped":12,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:35:30.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 19 13:35:31.311: INFO: Waiting up to 5m0s for pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7" in namespace "emptydir-4246" to be "Succeeded or Failed"
Sep 19 13:35:31.419: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 108.266745ms
Sep 19 13:35:33.528: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217075653s
Sep 19 13:35:35.638: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326862249s
Sep 19 13:35:37.746: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435447171s
Sep 19 13:35:39.856: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544706372s
Sep 19 13:35:41.966: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655282262s
Sep 19 13:35:44.075: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.76419285s
Sep 19 13:35:46.185: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.873653047s
Sep 19 13:35:48.293: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.982002539s
Sep 19 13:35:50.402: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.091297535s
Sep 19 13:35:52.511: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.20035548s
Sep 19 13:35:54.621: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.309535421s
STEP: Saw pod success
Sep 19 13:35:54.621: INFO: Pod "pod-1248bfb5-f555-419d-b08b-68f457b070d7" satisfied condition "Succeeded or Failed"
Sep 19 13:35:54.730: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-1248bfb5-f555-419d-b08b-68f457b070d7 container test-container: <nil>
STEP: delete the pod
Sep 19 13:35:54.958: INFO: Waiting for pod pod-1248bfb5-f555-419d-b08b-68f457b070d7 to disappear
Sep 19 13:35:55.068: INFO: Pod pod-1248bfb5-f555-419d-b08b-68f457b070d7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:55.297: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:35:55.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7283" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:55.638: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 72 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Sep 19 13:35:44.221: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 19 13:35:44.221: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9dhx
STEP: Creating a pod to test subpath
Sep 19 13:35:44.334: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9dhx" in namespace "provisioning-4737" to be "Succeeded or Failed"
Sep 19 13:35:44.445: INFO: Pod "pod-subpath-test-inlinevolume-9dhx": Phase="Pending", Reason="", readiness=false. Elapsed: 110.541953ms
Sep 19 13:35:46.556: INFO: Pod "pod-subpath-test-inlinevolume-9dhx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221882895s
Sep 19 13:35:48.668: INFO: Pod "pod-subpath-test-inlinevolume-9dhx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333766812s
Sep 19 13:35:50.778: INFO: Pod "pod-subpath-test-inlinevolume-9dhx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44431849s
Sep 19 13:35:52.889: INFO: Pod "pod-subpath-test-inlinevolume-9dhx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555181618s
Sep 19 13:35:55.002: INFO: Pod "pod-subpath-test-inlinevolume-9dhx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.66842097s
STEP: Saw pod success
Sep 19 13:35:55.003: INFO: Pod "pod-subpath-test-inlinevolume-9dhx" satisfied condition "Succeeded or Failed"
Sep 19 13:35:55.113: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-9dhx container test-container-subpath-inlinevolume-9dhx: <nil>
STEP: delete the pod
Sep 19 13:35:55.340: INFO: Waiting for pod pod-subpath-test-inlinevolume-9dhx to disappear
Sep 19 13:35:55.450: INFO: Pod pod-subpath-test-inlinevolume-9dhx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9dhx
Sep 19 13:35:55.450: INFO: Deleting pod "pod-subpath-test-inlinevolume-9dhx" in namespace "provisioning-4737"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:35:55.914: INFO: Driver aws doesn't publish storage capacity -- skipping
... skipping 131 lines ...
Sep 19 13:35:22.483: INFO: PersistentVolumeClaim pvc-4c7t2 found but phase is Pending instead of Bound.
Sep 19 13:35:24.597: INFO: PersistentVolumeClaim pvc-4c7t2 found and phase=Bound (4.336437333s)
Sep 19 13:35:24.597: INFO: Waiting up to 3m0s for PersistentVolume local-5csjs to have phase Bound
Sep 19 13:35:24.707: INFO: PersistentVolume local-5csjs found and phase=Bound (110.352647ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fvlr
STEP: Creating a pod to test subpath
Sep 19 13:35:25.048: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fvlr" in namespace "provisioning-7761" to be "Succeeded or Failed"
Sep 19 13:35:25.159: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 110.806654ms
Sep 19 13:35:27.272: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223999043s
Sep 19 13:35:29.384: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335622909s
Sep 19 13:35:31.497: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448669377s
Sep 19 13:35:33.608: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559587886s
Sep 19 13:35:35.721: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.672920287s
... skipping 3 lines ...
Sep 19 13:35:44.170: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 19.121168382s
Sep 19 13:35:46.282: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 21.233093631s
Sep 19 13:35:48.397: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 23.348725197s
Sep 19 13:35:50.508: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 25.459499333s
Sep 19 13:35:52.619: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.570734732s
STEP: Saw pod success
Sep 19 13:35:52.619: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr" satisfied condition "Succeeded or Failed"
Sep 19 13:35:52.730: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fvlr container test-container-subpath-preprovisionedpv-fvlr: <nil>
STEP: delete the pod
Sep 19 13:35:52.957: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fvlr to disappear
Sep 19 13:35:53.068: INFO: Pod pod-subpath-test-preprovisionedpv-fvlr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fvlr
Sep 19 13:35:53.068: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fvlr" in namespace "provisioning-7761"
STEP: Creating pod pod-subpath-test-preprovisionedpv-fvlr
STEP: Creating a pod to test subpath
Sep 19 13:35:53.291: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fvlr" in namespace "provisioning-7761" to be "Succeeded or Failed"
Sep 19 13:35:53.402: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 110.572267ms
Sep 19 13:35:55.514: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222103723s
Sep 19 13:35:57.624: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332887167s
STEP: Saw pod success
Sep 19 13:35:57.624: INFO: Pod "pod-subpath-test-preprovisionedpv-fvlr" satisfied condition "Succeeded or Failed"
Sep 19 13:35:57.738: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fvlr container test-container-subpath-preprovisionedpv-fvlr: <nil>
STEP: delete the pod
Sep 19 13:35:57.980: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fvlr to disappear
Sep 19 13:35:58.091: INFO: Pod pod-subpath-test-preprovisionedpv-fvlr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fvlr
Sep 19 13:35:58.091: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fvlr" in namespace "provisioning-7761"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:35:09.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
W0919 13:35:10.421302    4728 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 19 13:35:10.421: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 19 13:35:10.635: INFO: PodSpec: initContainers in spec.initContainers
Sep 19 13:36:00.452: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-42e63580-3913-41a6-8380-9f2a067f57c9", GenerateName:"", Namespace:"init-container-7826", SelfLink:"", UID:"6253f8e7-b539-4fa4-a8d4-da32440c82b1", ResourceVersion:"3759", Generation:0, CreationTimestamp:time.Date(2021, time.September, 19, 13, 35, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"635947613"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2021, time.September, 19, 13, 35, 10, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041bd1b8), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2021, time.September, 19, 13, 35, 17, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0041bd1e8), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-rjbvw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc003cca720), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-rjbvw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-rjbvw", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-rjbvw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003f0c278), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-48-58.eu-central-1.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011552d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003f0c2f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003f0c310)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003f0c318), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003f0c31c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003cc4ed0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 19, 13, 35, 10, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 19, 13, 35, 10, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 19, 13, 35, 10, 0, time.Local), Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 19, 13, 35, 10, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.48.58", PodIP:"100.96.2.66", PodIPs:[]v1.PodIP{v1.PodIP{IP:"100.96.2.66"}}, StartTime:time.Date(2021, time.September, 19, 13, 35, 10, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001155420)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001155490)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://d0aa5433746a326ae97e462cdcb29588de1365e1168643b44a93f8269f7cb29c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003cca7a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003cca780), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc003f0c39f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:36:00.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7826" for this suite.


• [SLOW TEST:50.799 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Sep 19 13:35:21.565: INFO: PersistentVolumeClaim pvc-4xlvr found but phase is Pending instead of Bound.
Sep 19 13:35:23.676: INFO: PersistentVolumeClaim pvc-4xlvr found and phase=Bound (2.21950104s)
Sep 19 13:35:23.676: INFO: Waiting up to 3m0s for PersistentVolume local-wfn4z to have phase Bound
Sep 19 13:35:23.789: INFO: PersistentVolume local-wfn4z found and phase=Bound (112.715049ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jz27
STEP: Creating a pod to test subpath
Sep 19 13:35:24.118: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jz27" in namespace "provisioning-7966" to be "Succeeded or Failed"
Sep 19 13:35:24.227: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 109.264393ms
Sep 19 13:35:26.337: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219483767s
Sep 19 13:35:28.448: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330151752s
Sep 19 13:35:30.562: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444320533s
Sep 19 13:35:32.671: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553531024s
Sep 19 13:35:34.781: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663707011s
... skipping 3 lines ...
Sep 19 13:35:43.223: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 19.104895266s
Sep 19 13:35:45.333: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 21.215465477s
Sep 19 13:35:47.443: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 23.325553026s
Sep 19 13:35:49.554: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 25.435842992s
Sep 19 13:35:51.663: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.545171539s
STEP: Saw pod success
Sep 19 13:35:51.663: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27" satisfied condition "Succeeded or Failed"
Sep 19 13:35:51.772: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-jz27 container test-container-subpath-preprovisionedpv-jz27: <nil>
STEP: delete the pod
Sep 19 13:35:52.000: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jz27 to disappear
Sep 19 13:35:52.111: INFO: Pod pod-subpath-test-preprovisionedpv-jz27 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jz27
Sep 19 13:35:52.111: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jz27" in namespace "provisioning-7966"
STEP: Creating pod pod-subpath-test-preprovisionedpv-jz27
STEP: Creating a pod to test subpath
Sep 19 13:35:52.331: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jz27" in namespace "provisioning-7966" to be "Succeeded or Failed"
Sep 19 13:35:52.440: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 108.593612ms
Sep 19 13:35:54.549: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217719908s
Sep 19 13:35:56.660: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.328360118s
STEP: Saw pod success
Sep 19 13:35:56.660: INFO: Pod "pod-subpath-test-preprovisionedpv-jz27" satisfied condition "Succeeded or Failed"
Sep 19 13:35:56.768: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-jz27 container test-container-subpath-preprovisionedpv-jz27: <nil>
STEP: delete the pod
Sep 19 13:35:57.023: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jz27 to disappear
Sep 19 13:35:57.134: INFO: Pod pod-subpath-test-preprovisionedpv-jz27 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jz27
Sep 19 13:35:57.134: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jz27" in namespace "provisioning-7966"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:00.959: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 207 lines ...
• [SLOW TEST:13.044 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 75 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:02.035: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 76 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 8 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 19 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:35:35.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-bf632d50-8a4e-40d9-85e8-5c061e3ef54d
STEP: Creating a pod to test consume configMaps
Sep 19 13:35:36.620: INFO: Waiting up to 5m0s for pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288" in namespace "configmap-7989" to be "Succeeded or Failed"
Sep 19 13:35:36.729: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 108.977189ms
Sep 19 13:35:38.838: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218108611s
Sep 19 13:35:40.947: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327456358s
Sep 19 13:35:43.058: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438130805s
Sep 19 13:35:45.168: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548757122s
Sep 19 13:35:47.280: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660066205s
... skipping 3 lines ...
Sep 19 13:35:55.720: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 19.099784599s
Sep 19 13:35:57.830: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 21.209984594s
Sep 19 13:35:59.945: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 23.324912575s
Sep 19 13:36:02.056: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Pending", Reason="", readiness=false. Elapsed: 25.436505682s
Sep 19 13:36:04.170: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.550653582s
STEP: Saw pod success
Sep 19 13:36:04.170: INFO: Pod "pod-configmaps-2097a205-568a-426d-a681-f1804c79d288" satisfied condition "Succeeded or Failed"
Sep 19 13:36:04.283: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-configmaps-2097a205-568a-426d-a681-f1804c79d288 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:36:04.509: INFO: Waiting for pod pod-configmaps-2097a205-568a-426d-a681-f1804c79d288 to disappear
Sep 19 13:36:04.618: INFO: Pod pod-configmaps-2097a205-568a-426d-a681-f1804c79d288 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:29.011 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:36:04.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-9225" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":4,"skipped":55,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:35:36.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":6,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:05.804: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-939c8655-a943-433c-b694-9b2a0debdf35
STEP: Creating a pod to test consume configMaps
Sep 19 13:35:54.606: INFO: Waiting up to 5m0s for pod "pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d" in namespace "configmap-2813" to be "Succeeded or Failed"
Sep 19 13:35:54.714: INFO: Pod "pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d": Phase="Pending", Reason="", readiness=false. Elapsed: 107.945026ms
Sep 19 13:35:56.824: INFO: Pod "pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217452045s
Sep 19 13:35:58.933: INFO: Pod "pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327136947s
Sep 19 13:36:01.046: INFO: Pod "pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439408566s
Sep 19 13:36:03.155: INFO: Pod "pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548698026s
Sep 19 13:36:05.265: INFO: Pod "pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.658423951s
STEP: Saw pod success
Sep 19 13:36:05.265: INFO: Pod "pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d" satisfied condition "Succeeded or Failed"
Sep 19 13:36:05.373: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:36:05.595: INFO: Waiting for pod pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d to disappear
Sep 19 13:36:05.704: INFO: Pod pod-configmaps-560c9bf4-bd70-4aba-a031-71ef9050406d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:06.650: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
• [SLOW TEST:14.263 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep 19 13:36:07.249: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 19 13:36:07.249: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-p7z5
STEP: Creating a pod to test subpath
Sep 19 13:36:07.364: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-p7z5" in namespace "provisioning-851" to be "Succeeded or Failed"
Sep 19 13:36:07.474: INFO: Pod "pod-subpath-test-inlinevolume-p7z5": Phase="Pending", Reason="", readiness=false. Elapsed: 110.185538ms
Sep 19 13:36:09.586: INFO: Pod "pod-subpath-test-inlinevolume-p7z5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222001326s
STEP: Saw pod success
Sep 19 13:36:09.586: INFO: Pod "pod-subpath-test-inlinevolume-p7z5" satisfied condition "Succeeded or Failed"
Sep 19 13:36:09.696: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-p7z5 container test-container-volume-inlinevolume-p7z5: <nil>
STEP: delete the pod
Sep 19 13:36:09.937: INFO: Waiting for pod pod-subpath-test-inlinevolume-p7z5 to disappear
Sep 19 13:36:10.047: INFO: Pod pod-subpath-test-inlinevolume-p7z5 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-p7z5
Sep 19 13:36:10.047: INFO: Deleting pod "pod-subpath-test-inlinevolume-p7z5" in namespace "provisioning-851"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:36:10.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-851" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Sep 19 13:35:36.671: INFO: PersistentVolumeClaim pvc-x8v7d found but phase is Pending instead of Bound.
Sep 19 13:35:38.780: INFO: PersistentVolumeClaim pvc-x8v7d found and phase=Bound (4.33795663s)
Sep 19 13:35:38.780: INFO: Waiting up to 3m0s for PersistentVolume local-sqm52 to have phase Bound
Sep 19 13:35:38.889: INFO: PersistentVolume local-sqm52 found and phase=Bound (108.593177ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lmnw
STEP: Creating a pod to test atomic-volume-subpath
Sep 19 13:35:39.219: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lmnw" in namespace "provisioning-296" to be "Succeeded or Failed"
Sep 19 13:35:39.328: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Pending", Reason="", readiness=false. Elapsed: 109.154579ms
Sep 19 13:35:41.438: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218628984s
Sep 19 13:35:43.547: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328137171s
Sep 19 13:35:45.658: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438618258s
Sep 19 13:35:47.767: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548223801s
Sep 19 13:35:49.876: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Running", Reason="", readiness=true. Elapsed: 10.657186441s
... skipping 5 lines ...
Sep 19 13:36:02.538: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Running", Reason="", readiness=true. Elapsed: 23.31892239s
Sep 19 13:36:04.650: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Running", Reason="", readiness=true. Elapsed: 25.430780433s
Sep 19 13:36:06.760: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Running", Reason="", readiness=true. Elapsed: 27.540437184s
Sep 19 13:36:08.869: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Running", Reason="", readiness=true. Elapsed: 29.650078278s
Sep 19 13:36:10.980: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.760936583s
STEP: Saw pod success
Sep 19 13:36:10.980: INFO: Pod "pod-subpath-test-preprovisionedpv-lmnw" satisfied condition "Succeeded or Failed"
Sep 19 13:36:11.090: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-lmnw container test-container-subpath-preprovisionedpv-lmnw: <nil>
STEP: delete the pod
Sep 19 13:36:11.315: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lmnw to disappear
Sep 19 13:36:11.426: INFO: Pod pod-subpath-test-preprovisionedpv-lmnw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lmnw
Sep 19 13:36:11.427: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lmnw" in namespace "provisioning-296"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:13.766: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 19 13:36:10.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7aea891e-f2e2-4242-a121-ac161ce4dec9" in namespace "downward-api-7067" to be "Succeeded or Failed"
Sep 19 13:36:10.390: INFO: Pod "downwardapi-volume-7aea891e-f2e2-4242-a121-ac161ce4dec9": Phase="Pending", Reason="", readiness=false. Elapsed: 108.163682ms
Sep 19 13:36:12.500: INFO: Pod "downwardapi-volume-7aea891e-f2e2-4242-a121-ac161ce4dec9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218191912s
Sep 19 13:36:14.609: INFO: Pod "downwardapi-volume-7aea891e-f2e2-4242-a121-ac161ce4dec9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327530535s
STEP: Saw pod success
Sep 19 13:36:14.609: INFO: Pod "downwardapi-volume-7aea891e-f2e2-4242-a121-ac161ce4dec9" satisfied condition "Succeeded or Failed"
Sep 19 13:36:14.721: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod downwardapi-volume-7aea891e-f2e2-4242-a121-ac161ce4dec9 container client-container: <nil>
STEP: delete the pod
Sep 19 13:36:14.961: INFO: Waiting for pod downwardapi-volume-7aea891e-f2e2-4242-a121-ac161ce4dec9 to disappear
Sep 19 13:36:15.069: INFO: Pod downwardapi-volume-7aea891e-f2e2-4242-a121-ac161ce4dec9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.682 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 56 lines ...
Sep 19 13:35:53.617: INFO: PersistentVolumeClaim pvc-nc5z8 found and phase=Bound (108.582939ms)
Sep 19 13:35:53.617: INFO: Waiting up to 3m0s for PersistentVolume nfs-kw7z2 to have phase Bound
Sep 19 13:35:53.728: INFO: PersistentVolume nfs-kw7z2 found and phase=Bound (110.257381ms)
STEP: Checking pod has write access to PersistentVolume
Sep 19 13:35:53.944: INFO: Creating nfs test pod
Sep 19 13:35:54.058: INFO: Pod should terminate with exitcode 0 (success)
Sep 19 13:35:54.058: INFO: Waiting up to 5m0s for pod "pvc-tester-vfw8l" in namespace "pv-3928" to be "Succeeded or Failed"
Sep 19 13:35:54.166: INFO: Pod "pvc-tester-vfw8l": Phase="Pending", Reason="", readiness=false. Elapsed: 107.49189ms
Sep 19 13:35:56.274: INFO: Pod "pvc-tester-vfw8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215852856s
Sep 19 13:35:58.383: INFO: Pod "pvc-tester-vfw8l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324754245s
Sep 19 13:36:00.492: INFO: Pod "pvc-tester-vfw8l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433873305s
Sep 19 13:36:02.602: INFO: Pod "pvc-tester-vfw8l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543580371s
Sep 19 13:36:04.711: INFO: Pod "pvc-tester-vfw8l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652097509s
Sep 19 13:36:06.820: INFO: Pod "pvc-tester-vfw8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.76151599s
STEP: Saw pod success
Sep 19 13:36:06.820: INFO: Pod "pvc-tester-vfw8l" satisfied condition "Succeeded or Failed"
Sep 19 13:36:06.820: INFO: Pod pvc-tester-vfw8l succeeded 
Sep 19 13:36:06.820: INFO: Deleting pod "pvc-tester-vfw8l" in namespace "pv-3928"
Sep 19 13:36:06.934: INFO: Wait up to 5m0s for pod "pvc-tester-vfw8l" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep 19 13:36:07.042: INFO: Deleting PVC pvc-nc5z8 to trigger reclamation of PV 
Sep 19 13:36:07.042: INFO: Deleting PersistentVolumeClaim "pvc-nc5z8"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:16.200: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
• [SLOW TEST:5.817 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":1,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:22.097: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 154 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":16,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:26.757: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":76,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:36:05.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
• [SLOW TEST:22.056 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":6,"skipped":76,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:28.021: INFO: Only supported for providers [azure] (not aws)
... skipping 102 lines ...
Sep 19 13:35:53.910: INFO: PersistentVolumeClaim csi-hostpathvkpgt found but phase is Pending instead of Bound.
Sep 19 13:35:56.021: INFO: PersistentVolumeClaim csi-hostpathvkpgt found but phase is Pending instead of Bound.
Sep 19 13:35:58.132: INFO: PersistentVolumeClaim csi-hostpathvkpgt found but phase is Pending instead of Bound.
Sep 19 13:36:00.242: INFO: PersistentVolumeClaim csi-hostpathvkpgt found and phase=Bound (42.320473966s)
STEP: Creating pod pod-subpath-test-dynamicpv-n7xb
STEP: Creating a pod to test subpath
Sep 19 13:36:00.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-n7xb" in namespace "provisioning-8117" to be "Succeeded or Failed"
Sep 19 13:36:00.680: INFO: Pod "pod-subpath-test-dynamicpv-n7xb": Phase="Pending", Reason="", readiness=false. Elapsed: 109.028527ms
Sep 19 13:36:02.790: INFO: Pod "pod-subpath-test-dynamicpv-n7xb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218757188s
Sep 19 13:36:04.900: INFO: Pod "pod-subpath-test-dynamicpv-n7xb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329091376s
Sep 19 13:36:07.010: INFO: Pod "pod-subpath-test-dynamicpv-n7xb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438995281s
Sep 19 13:36:09.121: INFO: Pod "pod-subpath-test-dynamicpv-n7xb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549274224s
Sep 19 13:36:11.232: INFO: Pod "pod-subpath-test-dynamicpv-n7xb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.6610322s
STEP: Saw pod success
Sep 19 13:36:11.232: INFO: Pod "pod-subpath-test-dynamicpv-n7xb" satisfied condition "Succeeded or Failed"
Sep 19 13:36:11.342: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-n7xb container test-container-volume-dynamicpv-n7xb: <nil>
STEP: delete the pod
Sep 19 13:36:11.579: INFO: Waiting for pod pod-subpath-test-dynamicpv-n7xb to disappear
Sep 19 13:36:11.688: INFO: Pod pod-subpath-test-dynamicpv-n7xb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-n7xb
Sep 19 13:36:11.688: INFO: Deleting pod "pod-subpath-test-dynamicpv-n7xb" in namespace "provisioning-8117"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":11,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:29.766: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 129 lines ...
• [SLOW TEST:17.623 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":6,"skipped":29,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":3,"skipped":4,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:36:16.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":4,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:8.220 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:35.023: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 178 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 19 13:36:28.698: INFO: Waiting up to 5m0s for pod "pod-5009370c-13c1-42e3-a8b1-95fa38dd4092" in namespace "emptydir-8691" to be "Succeeded or Failed"
Sep 19 13:36:28.806: INFO: Pod "pod-5009370c-13c1-42e3-a8b1-95fa38dd4092": Phase="Pending", Reason="", readiness=false. Elapsed: 107.744621ms
Sep 19 13:36:30.916: INFO: Pod "pod-5009370c-13c1-42e3-a8b1-95fa38dd4092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217385147s
Sep 19 13:36:33.025: INFO: Pod "pod-5009370c-13c1-42e3-a8b1-95fa38dd4092": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326549392s
Sep 19 13:36:35.134: INFO: Pod "pod-5009370c-13c1-42e3-a8b1-95fa38dd4092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.435655945s
STEP: Saw pod success
Sep 19 13:36:35.134: INFO: Pod "pod-5009370c-13c1-42e3-a8b1-95fa38dd4092" satisfied condition "Succeeded or Failed"
Sep 19 13:36:35.242: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-5009370c-13c1-42e3-a8b1-95fa38dd4092 container test-container: <nil>
STEP: delete the pod
Sep 19 13:36:35.467: INFO: Waiting for pod pod-5009370c-13c1-42e3-a8b1-95fa38dd4092 to disappear
Sep 19 13:36:35.575: INFO: Pod pod-5009370c-13c1-42e3-a8b1-95fa38dd4092 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":7,"skipped":81,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:35.805: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 61 lines ...
Sep 19 13:36:36.624: INFO: AfterEach: Cleaning up test resources.
Sep 19 13:36:36.624: INFO: pvc is nil
Sep 19 13:36:36.624: INFO: Deleting PersistentVolume "hostpath-t77fd"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":4,"skipped":54,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:36.774: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 70 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 80 lines ...
Sep 19 13:35:45.611: INFO: PersistentVolumeClaim csi-hostpathtrhmx found but phase is Pending instead of Bound.
Sep 19 13:35:47.722: INFO: PersistentVolumeClaim csi-hostpathtrhmx found but phase is Pending instead of Bound.
Sep 19 13:35:49.830: INFO: PersistentVolumeClaim csi-hostpathtrhmx found but phase is Pending instead of Bound.
Sep 19 13:35:51.938: INFO: PersistentVolumeClaim csi-hostpathtrhmx found and phase=Bound (33.850911625s)
STEP: Expanding non-expandable pvc
Sep 19 13:35:52.153: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Sep 19 13:35:52.377: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:35:54.595: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:35:56.620: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:35:58.594: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:00.598: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:02.594: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:04.594: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:06.594: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:08.594: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:10.593: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:12.594: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:14.600: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:16.644: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:18.594: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:20.595: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:22.598: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Sep 19 13:36:22.819: INFO: Error updating pvc csi-hostpathtrhmx: persistentvolumeclaims "csi-hostpathtrhmx" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Sep 19 13:36:22.819: INFO: Deleting PersistentVolumeClaim "csi-hostpathtrhmx"
Sep 19 13:36:22.928: INFO: Waiting up to 5m0s for PersistentVolume pvc-ab0832a9-19cb-4600-9079-7376ad17e96e to get deleted
Sep 19 13:36:23.036: INFO: PersistentVolume pvc-ab0832a9-19cb-4600-9079-7376ad17e96e was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-8775
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:36:32.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-e2b83cb9-7bd4-439f-9ec6-c52a11c10202
STEP: Creating a pod to test consume secrets
Sep 19 13:36:33.767: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891" in namespace "projected-4996" to be "Succeeded or Failed"
Sep 19 13:36:33.876: INFO: Pod "pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891": Phase="Pending", Reason="", readiness=false. Elapsed: 109.28355ms
Sep 19 13:36:35.985: INFO: Pod "pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218490011s
Sep 19 13:36:38.100: INFO: Pod "pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333283932s
Sep 19 13:36:40.211: INFO: Pod "pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.444906735s
STEP: Saw pod success
Sep 19 13:36:40.212: INFO: Pod "pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891" satisfied condition "Succeeded or Failed"
Sep 19 13:36:40.320: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891 container secret-volume-test: <nil>
STEP: delete the pod
Sep 19 13:36:40.544: INFO: Waiting for pod pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891 to disappear
Sep 19 13:36:40.653: INFO: Pod pod-projected-secrets-ce55f90c-e1c6-4354-8b03-c40178d37891 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.879 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":36,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:36:01.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 19 13:36:01.899: INFO: created pod
Sep 19 13:36:01.899: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-4209" to be "Succeeded or Failed"
Sep 19 13:36:02.043: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 143.680148ms
Sep 19 13:36:04.153: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.253773543s
Sep 19 13:36:06.263: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364071058s
Sep 19 13:36:08.373: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.473192527s
Sep 19 13:36:10.482: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.582776262s
STEP: Saw pod success
Sep 19 13:36:10.482: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Sep 19 13:36:40.485: INFO: polling logs
Sep 19 13:36:40.596: INFO: Pod logs: 
2021/09/19 13:36:03 OK: Got token
2021/09/19 13:36:03 validating with in-cluster discovery
2021/09/19 13:36:03 OK: got issuer https://api.internal.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io
2021/09/19 13:36:03 Full, not-validated claims: 
... skipping 14 lines ...
• [SLOW TEST:39.869 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":3,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:41.328: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
Sep 19 13:36:36.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Sep 19 13:36:37.381: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 19 13:36:37.607: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4498" in namespace "provisioning-4498" to be "Succeeded or Failed"
Sep 19 13:36:37.716: INFO: Pod "hostpath-symlink-prep-provisioning-4498": Phase="Pending", Reason="", readiness=false. Elapsed: 109.077239ms
Sep 19 13:36:39.827: INFO: Pod "hostpath-symlink-prep-provisioning-4498": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220330944s
Sep 19 13:36:41.938: INFO: Pod "hostpath-symlink-prep-provisioning-4498": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.330768451s
STEP: Saw pod success
Sep 19 13:36:41.938: INFO: Pod "hostpath-symlink-prep-provisioning-4498" satisfied condition "Succeeded or Failed"
Sep 19 13:36:41.938: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4498" in namespace "provisioning-4498"
Sep 19 13:36:42.056: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4498" to be fully deleted
Sep 19 13:36:42.165: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8dw2
STEP: Creating a pod to test subpath
Sep 19 13:36:42.306: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8dw2" in namespace "provisioning-4498" to be "Succeeded or Failed"
Sep 19 13:36:42.423: INFO: Pod "pod-subpath-test-inlinevolume-8dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 117.640564ms
Sep 19 13:36:44.534: INFO: Pod "pod-subpath-test-inlinevolume-8dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228734909s
Sep 19 13:36:46.645: INFO: Pod "pod-subpath-test-inlinevolume-8dw2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338836129s
Sep 19 13:36:48.757: INFO: Pod "pod-subpath-test-inlinevolume-8dw2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.451584769s
STEP: Saw pod success
Sep 19 13:36:48.757: INFO: Pod "pod-subpath-test-inlinevolume-8dw2" satisfied condition "Succeeded or Failed"
Sep 19 13:36:48.867: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-8dw2 container test-container-subpath-inlinevolume-8dw2: <nil>
STEP: delete the pod
Sep 19 13:36:49.092: INFO: Waiting for pod pod-subpath-test-inlinevolume-8dw2 to disappear
Sep 19 13:36:49.201: INFO: Pod pod-subpath-test-inlinevolume-8dw2 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8dw2
Sep 19 13:36:49.201: INFO: Deleting pod "pod-subpath-test-inlinevolume-8dw2" in namespace "provisioning-4498"
STEP: Deleting pod
Sep 19 13:36:49.311: INFO: Deleting pod "pod-subpath-test-inlinevolume-8dw2" in namespace "provisioning-4498"
Sep 19 13:36:49.530: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4498" in namespace "provisioning-4498" to be "Succeeded or Failed"
Sep 19 13:36:49.640: INFO: Pod "hostpath-symlink-prep-provisioning-4498": Phase="Pending", Reason="", readiness=false. Elapsed: 109.80531ms
Sep 19 13:36:51.751: INFO: Pod "hostpath-symlink-prep-provisioning-4498": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220820362s
STEP: Saw pod success
Sep 19 13:36:51.751: INFO: Pod "hostpath-symlink-prep-provisioning-4498" satisfied condition "Succeeded or Failed"
Sep 19 13:36:51.751: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4498" in namespace "provisioning-4498"
Sep 19 13:36:51.870: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4498" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:36:51.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4498" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Sep 19 13:36:22.413: INFO: PersistentVolumeClaim pvc-gj56n found but phase is Pending instead of Bound.
Sep 19 13:36:24.526: INFO: PersistentVolumeClaim pvc-gj56n found and phase=Bound (14.895954259s)
Sep 19 13:36:24.526: INFO: Waiting up to 3m0s for PersistentVolume local-pmbtm to have phase Bound
Sep 19 13:36:24.637: INFO: PersistentVolume local-pmbtm found and phase=Bound (111.545496ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5ddd
STEP: Creating a pod to test atomic-volume-subpath
Sep 19 13:36:24.970: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5ddd" in namespace "provisioning-8464" to be "Succeeded or Failed"
Sep 19 13:36:25.082: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 111.13201ms
Sep 19 13:36:27.193: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222848374s
Sep 19 13:36:29.305: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334332797s
Sep 19 13:36:31.417: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445985346s
Sep 19 13:36:33.529: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.558242178s
Sep 19 13:36:35.640: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Running", Reason="", readiness=true. Elapsed: 10.669774004s
... skipping 2 lines ...
Sep 19 13:36:41.974: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Running", Reason="", readiness=true. Elapsed: 17.003862844s
Sep 19 13:36:44.086: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Running", Reason="", readiness=true. Elapsed: 19.115269873s
Sep 19 13:36:46.198: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Running", Reason="", readiness=true. Elapsed: 21.227266782s
Sep 19 13:36:48.310: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Running", Reason="", readiness=true. Elapsed: 23.33928056s
Sep 19 13:36:50.422: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.451378444s
STEP: Saw pod success
Sep 19 13:36:50.422: INFO: Pod "pod-subpath-test-preprovisionedpv-5ddd" satisfied condition "Succeeded or Failed"
Sep 19 13:36:50.533: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-5ddd container test-container-subpath-preprovisionedpv-5ddd: <nil>
STEP: delete the pod
Sep 19 13:36:50.762: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5ddd to disappear
Sep 19 13:36:50.873: INFO: Pod pod-subpath-test-preprovisionedpv-5ddd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5ddd
Sep 19 13:36:50.873: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5ddd" in namespace "provisioning-8464"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":1,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:36:53.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-963" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":3,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:53.452: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] IngressClass API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:36:55.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-5194" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
Sep 19 13:36:07.716: INFO: PersistentVolumeClaim pvc-7klpw found and phase=Bound (113.703383ms)
STEP: Deleting the previously created pod
Sep 19 13:36:33.268: INFO: Deleting pod "pvc-volume-tester-jk7qg" in namespace "csi-mock-volumes-88"
Sep 19 13:36:33.379: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jk7qg" to be fully deleted
STEP: Checking CSI driver logs
Sep 19 13:36:37.718: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IlhhaHplVzZSOGtKWHB6Rm5JYm9zZDJqamZEVEk2c0tQbFE2cjRZR3RjYTAifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MzIwNTkxODMsImlhdCI6MTYzMjA1ODU4MywiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLWFlYzI3YzhjNjEtYjE3MmQudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtODgiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLWprN3FnIiwidWlkIjoiNzlhZDZiMDItYjQyMy00YTY2LTk5NWQtZGQxMTQ2ZDk4YzQxIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiNGIzZWYyY2MtODc2Ni00MTYxLWI0YTItNjdlY2UzZTY1MzYzIn19LCJuYmYiOjE2MzIwNTg1ODMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTg4OmRlZmF1bHQifQ.rzXZHv4M7QXq1Cu9hgHbdHlo5sDjj-hbg-dcYGm_9O2P4gap_sUtsarLoJ54SNJvIxtuFaXO0-YmXAbqjVIQX4aOEBtFHFFFZPyCX-tVlsjQ0bZBPjM49yoM9ImbfSVSgQ0W5hFlbXT4bZ73bHBWlX4L_qlkg6beQMMVsNMQ0v3ono088GM0kJwss4aa2q9ptNvLPLi0urnBX6j9PomwutMVUv3fBS1pmmxVZTRlBCc9fjqSK_tGAB6lau3KYFoRxBW6Bmeysv9J-BhYbtZP3EeH2ARLUzTBg369kKKW7VjT1iJiPbmo7ET-MUTR7jtJZgFV9a2J8z79Io4NN_RkdA","expirationTimestamp":"2021-09-19T13:46:23Z"}}
Sep 19 13:36:37.718: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/79ad6b02-b423-4a66-995d-dd1146d98c41/volumes/kubernetes.io~csi/pvc-ce051c94-735d-45b7-9844-17e55c9f66e4/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-jk7qg
Sep 19 13:36:37.718: INFO: Deleting pod "pvc-volume-tester-jk7qg" in namespace "csi-mock-volumes-88"
STEP: Deleting claim pvc-7klpw
Sep 19 13:36:38.051: INFO: Waiting up to 2m0s for PersistentVolume pvc-ce051c94-735d-45b7-9844-17e55c9f66e4 to get deleted
Sep 19 13:36:38.160: INFO: PersistentVolume pvc-ce051c94-735d-45b7-9844-17e55c9f66e4 was removed
STEP: Deleting storageclass csi-mock-volumes-88-sc2sb5v
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1496
    token should be plumbed down when csiServiceAccountTokenEnabled=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":4,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:56.448: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 93 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 63 lines ...
Sep 19 13:36:25.584: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Sep 19 13:36:25.695: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathb5svk] to have phase Bound
Sep 19 13:36:25.804: INFO: PersistentVolumeClaim csi-hostpathb5svk found but phase is Pending instead of Bound.
Sep 19 13:36:27.913: INFO: PersistentVolumeClaim csi-hostpathb5svk found and phase=Bound (2.218427826s)
STEP: Creating pod pod-subpath-test-dynamicpv-qqkz
STEP: Creating a pod to test subpath
Sep 19 13:36:28.248: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-qqkz" in namespace "provisioning-582" to be "Succeeded or Failed"
Sep 19 13:36:28.364: INFO: Pod "pod-subpath-test-dynamicpv-qqkz": Phase="Pending", Reason="", readiness=false. Elapsed: 116.016619ms
Sep 19 13:36:30.475: INFO: Pod "pod-subpath-test-dynamicpv-qqkz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226629785s
Sep 19 13:36:32.584: INFO: Pod "pod-subpath-test-dynamicpv-qqkz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336082874s
Sep 19 13:36:34.694: INFO: Pod "pod-subpath-test-dynamicpv-qqkz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446158982s
Sep 19 13:36:36.804: INFO: Pod "pod-subpath-test-dynamicpv-qqkz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.556520014s
STEP: Saw pod success
Sep 19 13:36:36.805: INFO: Pod "pod-subpath-test-dynamicpv-qqkz" satisfied condition "Succeeded or Failed"
Sep 19 13:36:36.914: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-qqkz container test-container-subpath-dynamicpv-qqkz: <nil>
STEP: delete the pod
Sep 19 13:36:37.161: INFO: Waiting for pod pod-subpath-test-dynamicpv-qqkz to disappear
Sep 19 13:36:37.272: INFO: Pod pod-subpath-test-dynamicpv-qqkz no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-qqkz
Sep 19 13:36:37.273: INFO: Deleting pod "pod-subpath-test-dynamicpv-qqkz" in namespace "provisioning-582"
... skipping 95 lines ...
• [SLOW TEST:16.110 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:57.468: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 131 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":8,"skipped":37,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:36:53.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 19 13:36:54.192: INFO: Waiting up to 5m0s for pod "pod-d58472fc-9352-43c2-b450-84afb444fc10" in namespace "emptydir-3545" to be "Succeeded or Failed"
Sep 19 13:36:54.305: INFO: Pod "pod-d58472fc-9352-43c2-b450-84afb444fc10": Phase="Pending", Reason="", readiness=false. Elapsed: 113.696361ms
Sep 19 13:36:56.417: INFO: Pod "pod-d58472fc-9352-43c2-b450-84afb444fc10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225464753s
Sep 19 13:36:58.531: INFO: Pod "pod-d58472fc-9352-43c2-b450-84afb444fc10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.339364024s
STEP: Saw pod success
Sep 19 13:36:58.531: INFO: Pod "pod-d58472fc-9352-43c2-b450-84afb444fc10" satisfied condition "Succeeded or Failed"
Sep 19 13:36:58.641: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-d58472fc-9352-43c2-b450-84afb444fc10 container test-container: <nil>
STEP: delete the pod
Sep 19 13:36:58.887: INFO: Waiting for pod pod-d58472fc-9352-43c2-b450-84afb444fc10 to disappear
Sep 19 13:36:58.998: INFO: Pod pod-d58472fc-9352-43c2-b450-84afb444fc10 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.711 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:59.258: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 112 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":5,"skipped":58,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:36:59.890: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 34 lines ...
• [SLOW TEST:5.222 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":48,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:02.793: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 36 lines ...
• [SLOW TEST:61.045 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:03.124: INFO: Only supported for providers [gce gke] (not aws)
... skipping 23 lines ...
Sep 19 13:36:59.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep 19 13:36:59.934: INFO: Waiting up to 5m0s for pod "test-pod-a15f2596-a85d-43b8-a88d-d8052bd6b973" in namespace "svcaccounts-2724" to be "Succeeded or Failed"
Sep 19 13:37:00.044: INFO: Pod "test-pod-a15f2596-a85d-43b8-a88d-d8052bd6b973": Phase="Pending", Reason="", readiness=false. Elapsed: 110.105062ms
Sep 19 13:37:02.155: INFO: Pod "test-pod-a15f2596-a85d-43b8-a88d-d8052bd6b973": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221109514s
Sep 19 13:37:04.267: INFO: Pod "test-pod-a15f2596-a85d-43b8-a88d-d8052bd6b973": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332855082s
STEP: Saw pod success
Sep 19 13:37:04.267: INFO: Pod "test-pod-a15f2596-a85d-43b8-a88d-d8052bd6b973" satisfied condition "Succeeded or Failed"
Sep 19 13:37:04.377: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod test-pod-a15f2596-a85d-43b8-a88d-d8052bd6b973 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:37:04.602: INFO: Waiting for pod test-pod-a15f2596-a85d-43b8-a88d-d8052bd6b973 to disappear
Sep 19 13:37:04.712: INFO: Pod test-pod-a15f2596-a85d-43b8-a88d-d8052bd6b973 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Container restart
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130
    should verify that container can restart successfully after configmaps modified
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:05.574: INFO: Only supported for providers [azure] (not aws)
... skipping 68 lines ...
Sep 19 13:36:51.091: INFO: PersistentVolumeClaim pvc-lzlws found but phase is Pending instead of Bound.
Sep 19 13:36:53.199: INFO: PersistentVolumeClaim pvc-lzlws found and phase=Bound (10.653711789s)
Sep 19 13:36:53.200: INFO: Waiting up to 3m0s for PersistentVolume local-jn8qq to have phase Bound
Sep 19 13:36:53.308: INFO: PersistentVolume local-jn8qq found and phase=Bound (107.998256ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rsxb
STEP: Creating a pod to test subpath
Sep 19 13:36:53.642: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rsxb" in namespace "provisioning-1520" to be "Succeeded or Failed"
Sep 19 13:36:53.750: INFO: Pod "pod-subpath-test-preprovisionedpv-rsxb": Phase="Pending", Reason="", readiness=false. Elapsed: 107.97142ms
Sep 19 13:36:55.877: INFO: Pod "pod-subpath-test-preprovisionedpv-rsxb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234912877s
Sep 19 13:36:57.986: INFO: Pod "pod-subpath-test-preprovisionedpv-rsxb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343679595s
Sep 19 13:37:00.095: INFO: Pod "pod-subpath-test-preprovisionedpv-rsxb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.452912439s
Sep 19 13:37:02.205: INFO: Pod "pod-subpath-test-preprovisionedpv-rsxb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.562877687s
STEP: Saw pod success
Sep 19 13:37:02.205: INFO: Pod "pod-subpath-test-preprovisionedpv-rsxb" satisfied condition "Succeeded or Failed"
Sep 19 13:37:02.317: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-rsxb container test-container-volume-preprovisionedpv-rsxb: <nil>
STEP: delete the pod
Sep 19 13:37:02.565: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rsxb to disappear
Sep 19 13:37:02.674: INFO: Pod pod-subpath-test-preprovisionedpv-rsxb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rsxb
Sep 19 13:37:02.674: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rsxb" in namespace "provisioning-1520"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:05.727: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 127 lines ...
Sep 19 13:37:03.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Sep 19 13:37:03.794: INFO: Waiting up to 5m0s for pod "var-expansion-75643cd8-6c2b-4525-a6a5-b6472869ecb4" in namespace "var-expansion-8579" to be "Succeeded or Failed"
Sep 19 13:37:03.904: INFO: Pod "var-expansion-75643cd8-6c2b-4525-a6a5-b6472869ecb4": Phase="Pending", Reason="", readiness=false. Elapsed: 109.57345ms
Sep 19 13:37:06.013: INFO: Pod "var-expansion-75643cd8-6c2b-4525-a6a5-b6472869ecb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219070889s
STEP: Saw pod success
Sep 19 13:37:06.013: INFO: Pod "var-expansion-75643cd8-6c2b-4525-a6a5-b6472869ecb4" satisfied condition "Succeeded or Failed"
Sep 19 13:37:06.123: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod var-expansion-75643cd8-6c2b-4525-a6a5-b6472869ecb4 container dapi-container: <nil>
STEP: delete the pod
Sep 19 13:37:06.347: INFO: Waiting for pod var-expansion-75643cd8-6c2b-4525-a6a5-b6472869ecb4 to disappear
Sep 19 13:37:06.455: INFO: Pod var-expansion-75643cd8-6c2b-4525-a6a5-b6472869ecb4 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:06.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8579" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 59 lines ...
Sep 19 13:36:29.359: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathrcprv] to have phase Bound
Sep 19 13:36:29.469: INFO: PersistentVolumeClaim csi-hostpathrcprv found but phase is Pending instead of Bound.
Sep 19 13:36:31.580: INFO: PersistentVolumeClaim csi-hostpathrcprv found but phase is Pending instead of Bound.
Sep 19 13:36:33.691: INFO: PersistentVolumeClaim csi-hostpathrcprv found and phase=Bound (4.33209956s)
STEP: Creating pod pod-subpath-test-dynamicpv-gff6
STEP: Creating a pod to test subpath
Sep 19 13:36:34.024: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-gff6" in namespace "provisioning-996" to be "Succeeded or Failed"
Sep 19 13:36:34.134: INFO: Pod "pod-subpath-test-dynamicpv-gff6": Phase="Pending", Reason="", readiness=false. Elapsed: 110.16756ms
Sep 19 13:36:36.246: INFO: Pod "pod-subpath-test-dynamicpv-gff6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221949544s
Sep 19 13:36:38.359: INFO: Pod "pod-subpath-test-dynamicpv-gff6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334670639s
Sep 19 13:36:40.473: INFO: Pod "pod-subpath-test-dynamicpv-gff6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448974336s
Sep 19 13:36:42.584: INFO: Pod "pod-subpath-test-dynamicpv-gff6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559638873s
Sep 19 13:36:44.695: INFO: Pod "pod-subpath-test-dynamicpv-gff6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.670884731s
Sep 19 13:36:46.806: INFO: Pod "pod-subpath-test-dynamicpv-gff6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.782083492s
Sep 19 13:36:48.916: INFO: Pod "pod-subpath-test-dynamicpv-gff6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.892439499s
STEP: Saw pod success
Sep 19 13:36:48.917: INFO: Pod "pod-subpath-test-dynamicpv-gff6" satisfied condition "Succeeded or Failed"
Sep 19 13:36:49.027: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-gff6 container test-container-subpath-dynamicpv-gff6: <nil>
STEP: delete the pod
Sep 19 13:36:49.256: INFO: Waiting for pod pod-subpath-test-dynamicpv-gff6 to disappear
Sep 19 13:36:49.366: INFO: Pod pod-subpath-test-dynamicpv-gff6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-gff6
Sep 19 13:36:49.366: INFO: Deleting pod "pod-subpath-test-dynamicpv-gff6" in namespace "provisioning-996"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:06.909: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 178 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:36:56.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
Sep 19 13:36:57.304: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 19 13:36:57.531: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7151" in namespace "provisioning-7151" to be "Succeeded or Failed"
Sep 19 13:36:57.640: INFO: Pod "hostpath-symlink-prep-provisioning-7151": Phase="Pending", Reason="", readiness=false. Elapsed: 108.852866ms
Sep 19 13:36:59.750: INFO: Pod "hostpath-symlink-prep-provisioning-7151": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218090757s
Sep 19 13:37:01.860: INFO: Pod "hostpath-symlink-prep-provisioning-7151": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328430287s
Sep 19 13:37:03.969: INFO: Pod "hostpath-symlink-prep-provisioning-7151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437854875s
STEP: Saw pod success
Sep 19 13:37:03.970: INFO: Pod "hostpath-symlink-prep-provisioning-7151" satisfied condition "Succeeded or Failed"
Sep 19 13:37:03.970: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7151" in namespace "provisioning-7151"
Sep 19 13:37:04.089: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7151" to be fully deleted
Sep 19 13:37:04.198: INFO: Creating resource for inline volume
Sep 19 13:37:04.198: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Sep 19 13:37:04.198: INFO: Deleting pod "pod-subpath-test-inlinevolume-nsh7" in namespace "provisioning-7151"
Sep 19 13:37:04.419: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7151" in namespace "provisioning-7151" to be "Succeeded or Failed"
Sep 19 13:37:04.530: INFO: Pod "hostpath-symlink-prep-provisioning-7151": Phase="Pending", Reason="", readiness=false. Elapsed: 111.363947ms
Sep 19 13:37:06.640: INFO: Pod "hostpath-symlink-prep-provisioning-7151": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221286673s
Sep 19 13:37:08.750: INFO: Pod "hostpath-symlink-prep-provisioning-7151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331436338s
STEP: Saw pod success
Sep 19 13:37:08.750: INFO: Pod "hostpath-symlink-prep-provisioning-7151" satisfied condition "Succeeded or Failed"
Sep 19 13:37:08.750: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7151" in namespace "provisioning-7151"
Sep 19 13:37:08.876: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7151" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:08.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7151" for this suite.
... skipping 18 lines ...
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:35:36.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-6062" for this suite.


• [SLOW TEST:93.531 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:289
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":2,"skipped":61,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:09.660: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 72 lines ...
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 19 13:37:06.372: INFO: The status of Pod server-envvars-a5bb9568-eab8-4262-a59e-eeeea3c9e488 is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:37:08.485: INFO: The status of Pod server-envvars-a5bb9568-eab8-4262-a59e-eeeea3c9e488 is Running (Ready = true)
Sep 19 13:37:08.821: INFO: Waiting up to 5m0s for pod "client-envvars-02b388d8-064e-43bd-b013-dcd4599d0aab" in namespace "pods-343" to be "Succeeded or Failed"
Sep 19 13:37:08.930: INFO: Pod "client-envvars-02b388d8-064e-43bd-b013-dcd4599d0aab": Phase="Pending", Reason="", readiness=false. Elapsed: 107.996988ms
Sep 19 13:37:11.039: INFO: Pod "client-envvars-02b388d8-064e-43bd-b013-dcd4599d0aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.217117826s
STEP: Saw pod success
Sep 19 13:37:11.039: INFO: Pod "client-envvars-02b388d8-064e-43bd-b013-dcd4599d0aab" satisfied condition "Succeeded or Failed"
Sep 19 13:37:11.148: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod client-envvars-02b388d8-064e-43bd-b013-dcd4599d0aab container env3cont: <nil>
STEP: delete the pod
Sep 19 13:37:11.371: INFO: Waiting for pod client-envvars-02b388d8-064e-43bd-b013-dcd4599d0aab to disappear
Sep 19 13:37:11.480: INFO: Pod client-envvars-02b388d8-064e-43bd-b013-dcd4599d0aab no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.093 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:11.712: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":4,"skipped":33,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:12.517: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 37 lines ...
• [SLOW TEST:19.862 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:37:09.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 19 13:37:09.942: INFO: Waiting up to 5m0s for pod "pod-aba428cf-d736-412c-a7a3-3772c2907f69" in namespace "emptydir-6235" to be "Succeeded or Failed"
Sep 19 13:37:10.054: INFO: Pod "pod-aba428cf-d736-412c-a7a3-3772c2907f69": Phase="Pending", Reason="", readiness=false. Elapsed: 111.600715ms
Sep 19 13:37:12.163: INFO: Pod "pod-aba428cf-d736-412c-a7a3-3772c2907f69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221360905s
Sep 19 13:37:14.273: INFO: Pod "pod-aba428cf-d736-412c-a7a3-3772c2907f69": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331217304s
Sep 19 13:37:16.384: INFO: Pod "pod-aba428cf-d736-412c-a7a3-3772c2907f69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.441989177s
STEP: Saw pod success
Sep 19 13:37:16.384: INFO: Pod "pod-aba428cf-d736-412c-a7a3-3772c2907f69" satisfied condition "Succeeded or Failed"
Sep 19 13:37:16.494: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-aba428cf-d736-412c-a7a3-3772c2907f69 container test-container: <nil>
STEP: delete the pod
Sep 19 13:37:16.718: INFO: Waiting for pod pod-aba428cf-d736-412c-a7a3-3772c2907f69 to disappear
Sep 19 13:37:16.830: INFO: Pod pod-aba428cf-d736-412c-a7a3-3772c2907f69 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.772 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:17.066: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 113 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:500
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command","total":-1,"completed":6,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:17.184: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
• [SLOW TEST:26.728 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:280
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":6,"skipped":71,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:12.073 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":5,"skipped":86,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256
    CSIStorageCapacity used, no capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:25.962: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
Sep 19 13:37:07.187: INFO: PersistentVolumeClaim pvc-4g5s5 found but phase is Pending instead of Bound.
Sep 19 13:37:09.296: INFO: PersistentVolumeClaim pvc-4g5s5 found and phase=Bound (6.438742848s)
Sep 19 13:37:09.296: INFO: Waiting up to 3m0s for PersistentVolume local-s9jwd to have phase Bound
Sep 19 13:37:09.404: INFO: PersistentVolume local-s9jwd found and phase=Bound (107.843609ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-758v
STEP: Creating a pod to test subpath
Sep 19 13:37:09.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-758v" in namespace "provisioning-4360" to be "Succeeded or Failed"
Sep 19 13:37:09.848: INFO: Pod "pod-subpath-test-preprovisionedpv-758v": Phase="Pending", Reason="", readiness=false. Elapsed: 108.653879ms
Sep 19 13:37:11.957: INFO: Pod "pod-subpath-test-preprovisionedpv-758v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217758435s
Sep 19 13:37:14.068: INFO: Pod "pod-subpath-test-preprovisionedpv-758v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328487929s
Sep 19 13:37:16.177: INFO: Pod "pod-subpath-test-preprovisionedpv-758v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436921829s
Sep 19 13:37:18.286: INFO: Pod "pod-subpath-test-preprovisionedpv-758v": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545995516s
Sep 19 13:37:20.395: INFO: Pod "pod-subpath-test-preprovisionedpv-758v": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65553196s
Sep 19 13:37:22.505: INFO: Pod "pod-subpath-test-preprovisionedpv-758v": Phase="Pending", Reason="", readiness=false. Elapsed: 12.765192447s
Sep 19 13:37:24.614: INFO: Pod "pod-subpath-test-preprovisionedpv-758v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.874498918s
STEP: Saw pod success
Sep 19 13:37:24.614: INFO: Pod "pod-subpath-test-preprovisionedpv-758v" satisfied condition "Succeeded or Failed"
Sep 19 13:37:24.723: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-758v container test-container-volume-preprovisionedpv-758v: <nil>
STEP: delete the pod
Sep 19 13:37:24.949: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-758v to disappear
Sep 19 13:37:25.058: INFO: Pod pod-subpath-test-preprovisionedpv-758v no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-758v
Sep 19 13:37:25.058: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-758v" in namespace "provisioning-4360"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:26.564: INFO: Only supported for providers [azure] (not aws)
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:479
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":6,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
• [SLOW TEST:14.358 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:28.879: INFO: Only supported for providers [gce gke] (not aws)
... skipping 66 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-47fca1a0-a641-4a44-9d8f-8cec6f930f1d
STEP: Creating a pod to test consume configMaps
Sep 19 13:37:19.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d" in namespace "projected-8350" to be "Succeeded or Failed"
Sep 19 13:37:20.016: INFO: Pod "pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 110.227887ms
Sep 19 13:37:22.127: INFO: Pod "pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221312977s
Sep 19 13:37:24.238: INFO: Pod "pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332699397s
Sep 19 13:37:26.350: INFO: Pod "pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444261431s
Sep 19 13:37:28.461: INFO: Pod "pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.555718031s
STEP: Saw pod success
Sep 19 13:37:28.461: INFO: Pod "pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d" satisfied condition "Succeeded or Failed"
Sep 19 13:37:28.572: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:37:28.803: INFO: Waiting for pod pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d to disappear
Sep 19 13:37:28.913: INFO: Pod pod-projected-configmaps-f95d2c1d-fa44-46c6-b061-e1c900d4fb5d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.007 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":91,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:29.158: INFO: Driver "local" does not provide raw block - skipping
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:29.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-3473" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":6,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Sep 19 13:37:21.096: INFO: PersistentVolumeClaim pvc-sbp7r found but phase is Pending instead of Bound.
Sep 19 13:37:23.205: INFO: PersistentVolumeClaim pvc-sbp7r found and phase=Bound (8.545232083s)
Sep 19 13:37:23.205: INFO: Waiting up to 3m0s for PersistentVolume local-tlrsz to have phase Bound
Sep 19 13:37:23.316: INFO: PersistentVolume local-tlrsz found and phase=Bound (110.671037ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-svjg
STEP: Creating a pod to test subpath
Sep 19 13:37:23.648: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-svjg" in namespace "provisioning-7952" to be "Succeeded or Failed"
Sep 19 13:37:23.757: INFO: Pod "pod-subpath-test-preprovisionedpv-svjg": Phase="Pending", Reason="", readiness=false. Elapsed: 108.611115ms
Sep 19 13:37:25.867: INFO: Pod "pod-subpath-test-preprovisionedpv-svjg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21824346s
Sep 19 13:37:27.976: INFO: Pod "pod-subpath-test-preprovisionedpv-svjg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327513756s
STEP: Saw pod success
Sep 19 13:37:27.976: INFO: Pod "pod-subpath-test-preprovisionedpv-svjg" satisfied condition "Succeeded or Failed"
Sep 19 13:37:28.084: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-svjg container test-container-subpath-preprovisionedpv-svjg: <nil>
STEP: delete the pod
Sep 19 13:37:28.334: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-svjg to disappear
Sep 19 13:37:28.442: INFO: Pod pod-subpath-test-preprovisionedpv-svjg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-svjg
Sep 19 13:37:28.443: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-svjg" in namespace "provisioning-7952"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:29.959: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-5cb5e487-e072-4e2d-b217-39237d2603ac
STEP: Creating a pod to test consume configMaps
Sep 19 13:37:26.379: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-291587d5-00ae-4650-9ad2-26fc6677de11" in namespace "projected-7806" to be "Succeeded or Failed"
Sep 19 13:37:26.488: INFO: Pod "pod-projected-configmaps-291587d5-00ae-4650-9ad2-26fc6677de11": Phase="Pending", Reason="", readiness=false. Elapsed: 109.444931ms
Sep 19 13:37:28.598: INFO: Pod "pod-projected-configmaps-291587d5-00ae-4650-9ad2-26fc6677de11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219415381s
Sep 19 13:37:30.708: INFO: Pod "pod-projected-configmaps-291587d5-00ae-4650-9ad2-26fc6677de11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329772135s
STEP: Saw pod success
Sep 19 13:37:30.708: INFO: Pod "pod-projected-configmaps-291587d5-00ae-4650-9ad2-26fc6677de11" satisfied condition "Succeeded or Failed"
Sep 19 13:37:30.818: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-projected-configmaps-291587d5-00ae-4650-9ad2-26fc6677de11 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:37:31.053: INFO: Waiting for pod pod-projected-configmaps-291587d5-00ae-4650-9ad2-26fc6677de11 to disappear
Sep 19 13:37:31.166: INFO: Pod pod-projected-configmaps-291587d5-00ae-4650-9ad2-26fc6677de11 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.783 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:31.411: INFO: Only supported for providers [openstack] (not aws)
... skipping 67 lines ...
Sep 19 13:37:26.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep 19 13:37:27.299: INFO: Waiting up to 5m0s for pod "var-expansion-ee0b465f-c80d-4062-9177-55e590ab6c92" in namespace "var-expansion-7011" to be "Succeeded or Failed"
Sep 19 13:37:27.407: INFO: Pod "var-expansion-ee0b465f-c80d-4062-9177-55e590ab6c92": Phase="Pending", Reason="", readiness=false. Elapsed: 108.274254ms
Sep 19 13:37:29.516: INFO: Pod "var-expansion-ee0b465f-c80d-4062-9177-55e590ab6c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217207646s
Sep 19 13:37:31.625: INFO: Pod "var-expansion-ee0b465f-c80d-4062-9177-55e590ab6c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.32659432s
STEP: Saw pod success
Sep 19 13:37:31.626: INFO: Pod "var-expansion-ee0b465f-c80d-4062-9177-55e590ab6c92" satisfied condition "Succeeded or Failed"
Sep 19 13:37:31.733: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod var-expansion-ee0b465f-c80d-4062-9177-55e590ab6c92 container dapi-container: <nil>
STEP: delete the pod
Sep 19 13:37:31.959: INFO: Waiting for pod var-expansion-ee0b465f-c80d-4062-9177-55e590ab6c92 to disappear
Sep 19 13:37:32.074: INFO: Pod var-expansion-ee0b465f-c80d-4062-9177-55e590ab6c92 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.655 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:37:29.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-d88186e4-c1a8-4e0b-a2ed-d7b81f654a6b
STEP: Creating a pod to test consume secrets
Sep 19 13:37:30.747: INFO: Waiting up to 5m0s for pod "pod-secrets-a1550b3e-c417-4f19-8590-b48846d59e17" in namespace "secrets-57" to be "Succeeded or Failed"
Sep 19 13:37:30.856: INFO: Pod "pod-secrets-a1550b3e-c417-4f19-8590-b48846d59e17": Phase="Pending", Reason="", readiness=false. Elapsed: 108.373059ms
Sep 19 13:37:32.964: INFO: Pod "pod-secrets-a1550b3e-c417-4f19-8590-b48846d59e17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.216427508s
STEP: Saw pod success
Sep 19 13:37:32.964: INFO: Pod "pod-secrets-a1550b3e-c417-4f19-8590-b48846d59e17" satisfied condition "Succeeded or Failed"
Sep 19 13:37:33.076: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-secrets-a1550b3e-c417-4f19-8590-b48846d59e17 container secret-volume-test: <nil>
STEP: delete the pod
Sep 19 13:37:33.306: INFO: Waiting for pod pod-secrets-a1550b3e-c417-4f19-8590-b48846d59e17 to disappear
Sep 19 13:37:33.429: INFO: Pod pod-secrets-a1550b3e-c417-4f19-8590-b48846d59e17 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:33.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-57" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:33.661: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 92 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 70 lines ...
Sep 19 13:37:22.159: INFO: PersistentVolumeClaim pvc-nkzpn found but phase is Pending instead of Bound.
Sep 19 13:37:24.268: INFO: PersistentVolumeClaim pvc-nkzpn found and phase=Bound (8.548907945s)
Sep 19 13:37:24.268: INFO: Waiting up to 3m0s for PersistentVolume local-jjlpx to have phase Bound
Sep 19 13:37:24.377: INFO: PersistentVolume local-jjlpx found and phase=Bound (108.531921ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-kl4n
STEP: Creating a pod to test exec-volume-test
Sep 19 13:37:24.704: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-kl4n" in namespace "volume-6604" to be "Succeeded or Failed"
Sep 19 13:37:24.818: INFO: Pod "exec-volume-test-preprovisionedpv-kl4n": Phase="Pending", Reason="", readiness=false. Elapsed: 113.822109ms
Sep 19 13:37:26.927: INFO: Pod "exec-volume-test-preprovisionedpv-kl4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223594793s
Sep 19 13:37:29.040: INFO: Pod "exec-volume-test-preprovisionedpv-kl4n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336653655s
Sep 19 13:37:31.150: INFO: Pod "exec-volume-test-preprovisionedpv-kl4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.446345433s
STEP: Saw pod success
Sep 19 13:37:31.150: INFO: Pod "exec-volume-test-preprovisionedpv-kl4n" satisfied condition "Succeeded or Failed"
Sep 19 13:37:31.259: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-kl4n container exec-container-preprovisionedpv-kl4n: <nil>
STEP: delete the pod
Sep 19 13:37:31.483: INFO: Waiting for pod exec-volume-test-preprovisionedpv-kl4n to disappear
Sep 19 13:37:31.592: INFO: Pod exec-volume-test-preprovisionedpv-kl4n no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-kl4n
Sep 19 13:37:31.592: INFO: Deleting pod "exec-volume-test-preprovisionedpv-kl4n" in namespace "volume-6604"
... skipping 22 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":76,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:33.861: INFO: Only supported for providers [gce gke] (not aws)
... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:8.340 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:37:35.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 19 13:37:35.961: INFO: Waiting up to 5m0s for pod "pod-866cae61-180e-4d0f-9067-e05aabd4d860" in namespace "emptydir-2233" to be "Succeeded or Failed"
Sep 19 13:37:36.077: INFO: Pod "pod-866cae61-180e-4d0f-9067-e05aabd4d860": Phase="Pending", Reason="", readiness=false. Elapsed: 115.846639ms
Sep 19 13:37:38.189: INFO: Pod "pod-866cae61-180e-4d0f-9067-e05aabd4d860": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.228254761s
STEP: Saw pod success
Sep 19 13:37:38.190: INFO: Pod "pod-866cae61-180e-4d0f-9067-e05aabd4d860" satisfied condition "Succeeded or Failed"
Sep 19 13:37:38.299: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-866cae61-180e-4d0f-9067-e05aabd4d860 container test-container: <nil>
STEP: delete the pod
Sep 19 13:37:38.530: INFO: Waiting for pod pod-866cae61-180e-4d0f-9067-e05aabd4d860 to disappear
Sep 19 13:37:38.639: INFO: Pod pod-866cae61-180e-4d0f-9067-e05aabd4d860 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:38.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2233" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":8,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 68 lines ...
Sep 19 13:37:20.917: INFO: Waiting for pod aws-client to disappear
Sep 19 13:37:21.026: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 19 13:37:21.026: INFO: Deleting PersistentVolumeClaim "pvc-tq5dn"
Sep 19 13:37:21.139: INFO: Deleting PersistentVolume "aws-x5nw6"
Sep 19 13:37:21.884: INFO: Couldn't delete PD "aws://eu-central-1a/vol-05ebed6e053ad1dbb", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05ebed6e053ad1dbb is currently attached to i-078030ae5ed6d2d1a
	status code: 400, request id: f1a5f08c-9bf7-46ed-9bd9-a5ed64f73157
Sep 19 13:37:27.478: INFO: Couldn't delete PD "aws://eu-central-1a/vol-05ebed6e053ad1dbb", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05ebed6e053ad1dbb is currently attached to i-078030ae5ed6d2d1a
	status code: 400, request id: 46eb9619-8c1f-4ae1-b275-16b5f47cd670
Sep 19 13:37:33.066: INFO: Couldn't delete PD "aws://eu-central-1a/vol-05ebed6e053ad1dbb", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05ebed6e053ad1dbb is currently attached to i-078030ae5ed6d2d1a
	status code: 400, request id: 60067f3f-4ded-48b7-b819-2eb0bf40ae5e
Sep 19 13:37:38.649: INFO: Successfully deleted PD "aws://eu-central-1a/vol-05ebed6e053ad1dbb".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:38.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6229" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:38.883: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 219 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":4,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:37:34.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Sep 19 13:37:35.325: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-238c28f3-f668-4f5d-80e3-959f0e4fe6cb" in namespace "security-context-test-2884" to be "Succeeded or Failed"
Sep 19 13:37:35.433: INFO: Pod "alpine-nnp-nil-238c28f3-f668-4f5d-80e3-959f0e4fe6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 108.638325ms
Sep 19 13:37:37.542: INFO: Pod "alpine-nnp-nil-238c28f3-f668-4f5d-80e3-959f0e4fe6cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217732792s
Sep 19 13:37:39.654: INFO: Pod "alpine-nnp-nil-238c28f3-f668-4f5d-80e3-959f0e4fe6cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329195725s
Sep 19 13:37:39.654: INFO: Pod "alpine-nnp-nil-238c28f3-f668-4f5d-80e3-959f0e4fe6cb" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:39.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2884" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:40.002: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
STEP: Deleting pod verify-service-up-exec-pod-dkshc in namespace services-4193
STEP: verifying service-headless is not up
Sep 19 13:37:01.195: INFO: Creating new host exec pod
Sep 19 13:37:01.416: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:37:03.525: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:37:05.526: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Sep 19 13:37:05.526: INFO: Running '/tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4193 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.115.72:80 && echo service-down-failed'
Sep 19 13:37:08.676: INFO: rc: 28
Sep 19 13:37:08.676: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.115.72:80 && echo service-down-failed" in pod services-4193/verify-service-down-host-exec-pod: error running /tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4193 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.115.72:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.71.115.72:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4193
STEP: adding service.kubernetes.io/headless label
STEP: verifying service is not up
Sep 19 13:37:09.046: INFO: Creating new host exec pod
Sep 19 13:37:09.267: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:37:11.377: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:37:13.377: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:37:15.378: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Sep 19 13:37:15.378: INFO: Running '/tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4193 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.3.97:80 && echo service-down-failed'
Sep 19 13:37:18.556: INFO: rc: 28
Sep 19 13:37:18.557: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.66.3.97:80 && echo service-down-failed" in pod services-4193/verify-service-down-host-exec-pod: error running /tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4193 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.66.3.97:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.66.3.97:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4193
STEP: removing service.kubernetes.io/headless annotation
STEP: verifying service is up
Sep 19 13:37:18.913: INFO: Creating new host exec pod
... skipping 15 lines ...
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-4193
STEP: Deleting pod verify-service-up-exec-pod-drxjs in namespace services-4193
STEP: verifying service-headless is still not up
Sep 19 13:37:34.487: INFO: Creating new host exec pod
Sep 19 13:37:34.706: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:37:36.816: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Sep 19 13:37:36.816: INFO: Running '/tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4193 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.115.72:80 && echo service-down-failed'
Sep 19 13:37:39.977: INFO: rc: 28
Sep 19 13:37:39.977: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.71.115.72:80 && echo service-down-failed" in pod services-4193/verify-service-down-host-exec-pod: error running /tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4193 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.71.115.72:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.71.115.72:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4193
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:40.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:60.313 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1940
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:40.329: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Sep 19 13:37:17.769: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 19 13:37:17.769: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-sk8z
STEP: Creating a pod to test atomic-volume-subpath
Sep 19 13:37:17.928: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-sk8z" in namespace "provisioning-882" to be "Succeeded or Failed"
Sep 19 13:37:18.037: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Pending", Reason="", readiness=false. Elapsed: 109.122566ms
Sep 19 13:37:20.148: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220034437s
Sep 19 13:37:22.259: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33088311s
Sep 19 13:37:24.369: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44108438s
Sep 19 13:37:26.480: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Running", Reason="", readiness=true. Elapsed: 8.551841591s
Sep 19 13:37:28.590: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Running", Reason="", readiness=true. Elapsed: 10.662307852s
Sep 19 13:37:30.700: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Running", Reason="", readiness=true. Elapsed: 12.772173354s
Sep 19 13:37:32.811: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Running", Reason="", readiness=true. Elapsed: 14.882766674s
Sep 19 13:37:34.921: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Running", Reason="", readiness=true. Elapsed: 16.993381245s
Sep 19 13:37:37.031: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Running", Reason="", readiness=true. Elapsed: 19.103532445s
Sep 19 13:37:39.145: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Running", Reason="", readiness=true. Elapsed: 21.217397219s
Sep 19 13:37:41.255: INFO: Pod "pod-subpath-test-inlinevolume-sk8z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.327525911s
STEP: Saw pod success
Sep 19 13:37:41.255: INFO: Pod "pod-subpath-test-inlinevolume-sk8z" satisfied condition "Succeeded or Failed"
Sep 19 13:37:41.365: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-sk8z container test-container-subpath-inlinevolume-sk8z: <nil>
STEP: delete the pod
Sep 19 13:37:41.591: INFO: Waiting for pod pod-subpath-test-inlinevolume-sk8z to disappear
Sep 19 13:37:41.700: INFO: Pod pod-subpath-test-inlinevolume-sk8z no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-sk8z
Sep 19 13:37:41.700: INFO: Deleting pod "pod-subpath-test-inlinevolume-sk8z" in namespace "provisioning-882"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":77,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:37:38.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-f7013af1-1d85-451e-9fd8-6f64ef21bc66
STEP: Creating a pod to test consume configMaps
Sep 19 13:37:39.708: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56a85fa1-ce3c-4118-ac54-587048313a08" in namespace "projected-2444" to be "Succeeded or Failed"
Sep 19 13:37:39.821: INFO: Pod "pod-projected-configmaps-56a85fa1-ce3c-4118-ac54-587048313a08": Phase="Pending", Reason="", readiness=false. Elapsed: 112.930906ms
Sep 19 13:37:41.932: INFO: Pod "pod-projected-configmaps-56a85fa1-ce3c-4118-ac54-587048313a08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224034054s
Sep 19 13:37:44.047: INFO: Pod "pod-projected-configmaps-56a85fa1-ce3c-4118-ac54-587048313a08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.338991659s
STEP: Saw pod success
Sep 19 13:37:44.047: INFO: Pod "pod-projected-configmaps-56a85fa1-ce3c-4118-ac54-587048313a08" satisfied condition "Succeeded or Failed"
Sep 19 13:37:44.167: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-projected-configmaps-56a85fa1-ce3c-4118-ac54-587048313a08 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:37:44.406: INFO: Waiting for pod pod-projected-configmaps-56a85fa1-ce3c-4118-ac54-587048313a08 to disappear
Sep 19 13:37:44.515: INFO: Pod pod-projected-configmaps-56a85fa1-ce3c-4118-ac54-587048313a08 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.816 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:44.748: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":5,"skipped":29,"failed":0}
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:37:41.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:44.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-6169" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":6,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:44.769: INFO: Only supported for providers [gce gke] (not aws)
... skipping 54 lines ...
STEP: Destroying namespace "apply-4189" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":7,"skipped":35,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:46.775: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Sep 19 13:37:41.881: INFO: Waiting up to 5m0s for pod "metadata-volume-a32e4eaa-cb5c-4b36-9e87-e282030ae37a" in namespace "projected-2861" to be "Succeeded or Failed"
Sep 19 13:37:41.990: INFO: Pod "metadata-volume-a32e4eaa-cb5c-4b36-9e87-e282030ae37a": Phase="Pending", Reason="", readiness=false. Elapsed: 108.883091ms
Sep 19 13:37:44.100: INFO: Pod "metadata-volume-a32e4eaa-cb5c-4b36-9e87-e282030ae37a": Phase="Running", Reason="", readiness=true. Elapsed: 2.219331005s
Sep 19 13:37:46.210: INFO: Pod "metadata-volume-a32e4eaa-cb5c-4b36-9e87-e282030ae37a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329433685s
STEP: Saw pod success
Sep 19 13:37:46.211: INFO: Pod "metadata-volume-a32e4eaa-cb5c-4b36-9e87-e282030ae37a" satisfied condition "Succeeded or Failed"
Sep 19 13:37:46.322: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod metadata-volume-a32e4eaa-cb5c-4b36-9e87-e282030ae37a container client-container: <nil>
STEP: delete the pod
Sep 19 13:37:46.562: INFO: Waiting for pod metadata-volume-a32e4eaa-cb5c-4b36-9e87-e282030ae37a to disappear
Sep 19 13:37:46.675: INFO: Pod metadata-volume-a32e4eaa-cb5c-4b36-9e87-e282030ae37a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.682 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":40,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:46.927: INFO: Driver local doesn't support ext4 -- skipping
... skipping 160 lines ...
Sep 19 13:37:37.382: INFO: PersistentVolumeClaim pvc-z24lk found but phase is Pending instead of Bound.
Sep 19 13:37:39.493: INFO: PersistentVolumeClaim pvc-z24lk found and phase=Bound (4.333081941s)
Sep 19 13:37:39.493: INFO: Waiting up to 3m0s for PersistentVolume local-q8pvx to have phase Bound
Sep 19 13:37:39.604: INFO: PersistentVolume local-q8pvx found and phase=Bound (110.185437ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zwzm
STEP: Creating a pod to test subpath
Sep 19 13:37:39.953: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zwzm" in namespace "provisioning-3714" to be "Succeeded or Failed"
Sep 19 13:37:40.063: INFO: Pod "pod-subpath-test-preprovisionedpv-zwzm": Phase="Pending", Reason="", readiness=false. Elapsed: 110.427959ms
Sep 19 13:37:42.175: INFO: Pod "pod-subpath-test-preprovisionedpv-zwzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222162969s
Sep 19 13:37:44.290: INFO: Pod "pod-subpath-test-preprovisionedpv-zwzm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337548112s
Sep 19 13:37:46.405: INFO: Pod "pod-subpath-test-preprovisionedpv-zwzm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.452626552s
STEP: Saw pod success
Sep 19 13:37:46.406: INFO: Pod "pod-subpath-test-preprovisionedpv-zwzm" satisfied condition "Succeeded or Failed"
Sep 19 13:37:46.515: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-zwzm container test-container-volume-preprovisionedpv-zwzm: <nil>
STEP: delete the pod
Sep 19 13:37:46.754: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zwzm to disappear
Sep 19 13:37:46.864: INFO: Pod pod-subpath-test-preprovisionedpv-zwzm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zwzm
Sep 19 13:37:46.864: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zwzm" in namespace "provisioning-3714"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":7,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 52 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":8,"skipped":42,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:48.457: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 19 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":5,"skipped":20,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:37:04.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:43.561 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete pods when suspended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:111
------------------------------
{"msg":"PASSED [sig-apps] Job should delete pods when suspended","total":-1,"completed":6,"skipped":20,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:48.609: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 133 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Sep 19 13:37:42.818: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-4445" to be "Succeeded or Failed"
Sep 19 13:37:42.927: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 108.896069ms
Sep 19 13:37:45.039: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220359538s
Sep 19 13:37:47.149: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330365937s
Sep 19 13:37:49.261: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442225877s
Sep 19 13:37:51.370: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552143149s
Sep 19 13:37:53.483: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.664847203s
Sep 19 13:37:53.483: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:53.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4445" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":8,"skipped":78,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:53.843: INFO: Only supported for providers [gce gke] (not aws)
... skipping 47 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-1206/secret-test-b60e64fd-edb2-435a-8c9b-724427f0e3e7
STEP: Creating a pod to test consume secrets
Sep 19 13:37:49.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-c7f8331d-2f57-415d-b955-6c7a0d85c1c1" in namespace "secrets-1206" to be "Succeeded or Failed"
Sep 19 13:37:49.621: INFO: Pod "pod-configmaps-c7f8331d-2f57-415d-b955-6c7a0d85c1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 110.027943ms
Sep 19 13:37:51.731: INFO: Pod "pod-configmaps-c7f8331d-2f57-415d-b955-6c7a0d85c1c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220928231s
Sep 19 13:37:53.843: INFO: Pod "pod-configmaps-c7f8331d-2f57-415d-b955-6c7a0d85c1c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331990468s
STEP: Saw pod success
Sep 19 13:37:53.843: INFO: Pod "pod-configmaps-c7f8331d-2f57-415d-b955-6c7a0d85c1c1" satisfied condition "Succeeded or Failed"
Sep 19 13:37:53.953: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-configmaps-c7f8331d-2f57-415d-b955-6c7a0d85c1c1 container env-test: <nil>
STEP: delete the pod
Sep 19 13:37:54.180: INFO: Waiting for pod pod-configmaps-c7f8331d-2f57-415d-b955-6c7a0d85c1c1 to disappear
Sep 19 13:37:54.291: INFO: Pod pod-configmaps-c7f8331d-2f57-415d-b955-6c7a0d85c1c1 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:37:54.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3978" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":88,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:54.895: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
Sep 19 13:37:30.999: INFO: PersistentVolumeClaim pvc-lgxwt found and phase=Bound (109.157785ms)
Sep 19 13:37:30.999: INFO: Waiting up to 3m0s for PersistentVolume nfs-85ctp to have phase Bound
Sep 19 13:37:31.110: INFO: PersistentVolume nfs-85ctp found and phase=Bound (110.711895ms)
STEP: Checking pod has write access to PersistentVolume
Sep 19 13:37:31.327: INFO: Creating nfs test pod
Sep 19 13:37:31.438: INFO: Pod should terminate with exitcode 0 (success)
Sep 19 13:37:31.438: INFO: Waiting up to 5m0s for pod "pvc-tester-qq679" in namespace "pv-6421" to be "Succeeded or Failed"
Sep 19 13:37:31.547: INFO: Pod "pvc-tester-qq679": Phase="Pending", Reason="", readiness=false. Elapsed: 108.757755ms
Sep 19 13:37:33.661: INFO: Pod "pvc-tester-qq679": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222382097s
Sep 19 13:37:35.771: INFO: Pod "pvc-tester-qq679": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332353063s
Sep 19 13:37:37.880: INFO: Pod "pvc-tester-qq679": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.441757749s
STEP: Saw pod success
Sep 19 13:37:37.880: INFO: Pod "pvc-tester-qq679" satisfied condition "Succeeded or Failed"
Sep 19 13:37:37.880: INFO: Pod pvc-tester-qq679 succeeded 
Sep 19 13:37:37.880: INFO: Deleting pod "pvc-tester-qq679" in namespace "pv-6421"
Sep 19 13:37:37.993: INFO: Wait up to 5m0s for pod "pvc-tester-qq679" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep 19 13:37:38.103: INFO: Deleting PVC pvc-lgxwt to trigger reclamation of PV nfs-85ctp
Sep 19 13:37:38.103: INFO: Deleting PersistentVolumeClaim "pvc-lgxwt"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":5,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:55.255: INFO: Only supported for providers [openstack] (not aws)
... skipping 180 lines ...
• [SLOW TEST:11.687 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should list and delete a collection of ReplicaSets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":10,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:8.387 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":3,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:57.584: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":7,"skipped":59,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:37:58.465: INFO: Driver local doesn't support ext3 -- skipping
... skipping 183 lines ...
Sep 19 13:37:51.480: INFO: PersistentVolumeClaim pvc-hbg4p found but phase is Pending instead of Bound.
Sep 19 13:37:53.590: INFO: PersistentVolumeClaim pvc-hbg4p found and phase=Bound (10.673372113s)
Sep 19 13:37:53.590: INFO: Waiting up to 3m0s for PersistentVolume local-5xj4j to have phase Bound
Sep 19 13:37:53.700: INFO: PersistentVolume local-5xj4j found and phase=Bound (109.743706ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gvrc
STEP: Creating a pod to test subpath
Sep 19 13:37:54.032: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gvrc" in namespace "provisioning-7538" to be "Succeeded or Failed"
Sep 19 13:37:54.141: INFO: Pod "pod-subpath-test-preprovisionedpv-gvrc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.589374ms
Sep 19 13:37:56.252: INFO: Pod "pod-subpath-test-preprovisionedpv-gvrc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219998374s
Sep 19 13:37:58.362: INFO: Pod "pod-subpath-test-preprovisionedpv-gvrc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329868225s
STEP: Saw pod success
Sep 19 13:37:58.362: INFO: Pod "pod-subpath-test-preprovisionedpv-gvrc" satisfied condition "Succeeded or Failed"
Sep 19 13:37:58.471: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-gvrc container test-container-volume-preprovisionedpv-gvrc: <nil>
STEP: delete the pod
Sep 19 13:37:58.699: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gvrc to disappear
Sep 19 13:37:58.812: INFO: Pod pod-subpath-test-preprovisionedpv-gvrc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gvrc
Sep 19 13:37:58.813: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gvrc" in namespace "provisioning-7538"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":15,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 19 13:37:49.179: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182" in namespace "projected-4330" to be "Succeeded or Failed"
Sep 19 13:37:49.289: INFO: Pod "downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182": Phase="Pending", Reason="", readiness=false. Elapsed: 110.037889ms
Sep 19 13:37:51.399: INFO: Pod "downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220064242s
Sep 19 13:37:53.510: INFO: Pod "downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331127499s
Sep 19 13:37:55.623: INFO: Pod "downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444088178s
Sep 19 13:37:57.734: INFO: Pod "downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555125058s
Sep 19 13:37:59.846: INFO: Pod "downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.667180181s
STEP: Saw pod success
Sep 19 13:37:59.847: INFO: Pod "downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182" satisfied condition "Succeeded or Failed"
Sep 19 13:37:59.956: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182 container client-container: <nil>
STEP: delete the pod
Sep 19 13:38:00.184: INFO: Waiting for pod downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182 to disappear
Sep 19 13:38:00.297: INFO: Pod downwardapi-volume-71e20362-e9a0-470e-b4ef-9ec40e8c1182 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.030 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":108,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 50 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 19 13:37:55.590: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3" in namespace "downward-api-7274" to be "Succeeded or Failed"
Sep 19 13:37:55.702: INFO: Pod "downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3": Phase="Pending", Reason="", readiness=false. Elapsed: 112.259553ms
Sep 19 13:37:57.812: INFO: Pod "downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222863692s
Sep 19 13:37:59.926: INFO: Pod "downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336787792s
Sep 19 13:38:02.072: INFO: Pod "downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.482210481s
STEP: Saw pod success
Sep 19 13:38:02.072: INFO: Pod "downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3" satisfied condition "Succeeded or Failed"
Sep 19 13:38:02.191: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3 container client-container: <nil>
STEP: delete the pod
Sep 19 13:38:02.432: INFO: Waiting for pod downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3 to disappear
Sep 19 13:38:02.542: INFO: Pod downwardapi-volume-7bc37b3d-4a65-4939-a994-f9075557ffa3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 22 lines ...
Sep 19 13:36:37.019: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:sctp-7824 PodName:hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 19 13:36:37.019: INFO: >>> kubeConfig: /root/.kube/config
Sep 19 13:36:37.790: INFO: exec ip-172-20-50-204.eu-central-1.compute.internal: command:   lsmod | grep sctp
Sep 19 13:36:37.791: INFO: exec ip-172-20-50-204.eu-central-1.compute.internal: stdout:    ""
Sep 19 13:36:37.791: INFO: exec ip-172-20-50-204.eu-central-1.compute.internal: stderr:    ""
Sep 19 13:36:37.791: INFO: exec ip-172-20-50-204.eu-central-1.compute.internal: exit code: 0
Sep 19 13:36:37.791: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Sep 19 13:36:37.791: INFO: the sctp module is not loaded on node: ip-172-20-50-204.eu-central-1.compute.internal
STEP: Deleting pod hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4 in namespace sctp-7824
STEP: creating a pod with hostport on the selected node
STEP: Launching the pod on node ip-172-20-50-204.eu-central-1.compute.internal
Sep 19 13:36:38.124: INFO: The status of Pod hostport is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:36:40.233: INFO: The status of Pod hostport is Pending, waiting for it to be Running (with Ready = true)
... skipping 127 lines ...
Sep 19 13:37:55.506: INFO: >>> kubeConfig: /root/.kube/config
Sep 19 13:37:56.274: INFO: retrying ... not hostport sctp iptables rules found on node ip-172-20-50-204.eu-central-1.compute.internal
Sep 19 13:37:56.274: INFO: Executing cmd "iptables-save" on node ip-172-20-50-204.eu-central-1.compute.internal
Sep 19 13:37:56.274: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c iptables-save] Namespace:sctp-7824 PodName:hostexec-ip-172-20-50-204.eu-central-1.compute.internal-wl6gt ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 19 13:37:56.274: INFO: >>> kubeConfig: /root/.kube/config
Sep 19 13:37:57.098: INFO: retrying ... not hostport sctp iptables rules found on node ip-172-20-50-204.eu-central-1.compute.internal
Sep 19 13:37:57.098: FAIL: iptables rules are not set for a pod with sctp hostport

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000086fb0)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:128 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x22146b9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
... skipping 9 lines ...
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:32 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4: {default-scheduler } Scheduled: Successfully assigned sctp-7824/hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4 to ip-172-20-50-204.eu-central-1.compute.internal
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:33 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:33 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Started: Started container agnhost-container
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:33 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Created: Created container agnhost-container
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:37 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Killing: Stopping container agnhost-container
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:37 +0000 UTC - event for hostport: {default-scheduler } Scheduled: Successfully assigned sctp-7824/hostport to ip-172-20-50-204.eu-central-1.compute.internal
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:39 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-429s4: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} FailedKillPod: error killing pod: failed to "KillContainer" for "agnhost-container" with KillContainerError: "rpc error: code = NotFound desc = an error occurred when try to find container \"4b2e2dd0adb91507b0ed527466712e6da8e4b69cb7d4cf9d71b4dff251e16fc6\": not found"
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:39 +0000 UTC - event for hostport: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Created: Created container agnhost-container
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:39 +0000 UTC - event for hostport: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.33" already present on machine
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:39 +0000 UTC - event for hostport: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Started: Started container agnhost-container
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:50 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-wl6gt: {default-scheduler } Scheduled: Successfully assigned sctp-7824/hostexec-ip-172-20-50-204.eu-central-1.compute.internal-wl6gt to ip-172-20-50-204.eu-central-1.compute.internal
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:51 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-wl6gt: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Created: Created container agnhost-container
Sep 19 13:37:57.443: INFO: At 2021-09-19 13:36:51 +0000 UTC - event for hostexec-ip-172-20-50-204.eu-central-1.compute.internal-wl6gt: {kubelet ip-172-20-50-204.eu-central-1.compute.internal} Started: Started container agnhost-container
... skipping 212 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3228

  Sep 19 13:37:57.098: iptables rules are not set for a pod with sctp hostport

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort","total":-1,"completed":1,"skipped":25,"failed":1,"failures":["[sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Sep 19 13:37:31.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
Sep 19 13:37:32.007: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 19 13:37:32.250: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3161" in namespace "provisioning-3161" to be "Succeeded or Failed"
Sep 19 13:37:32.359: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 109.723347ms
Sep 19 13:37:34.470: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219818208s
Sep 19 13:37:36.580: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330053249s
Sep 19 13:37:38.691: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441558895s
Sep 19 13:37:40.809: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559332743s
Sep 19 13:37:42.920: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 10.670008743s
Sep 19 13:37:45.034: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.784217286s
STEP: Saw pod success
Sep 19 13:37:45.034: INFO: Pod "hostpath-symlink-prep-provisioning-3161" satisfied condition "Succeeded or Failed"
Sep 19 13:37:45.034: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3161" in namespace "provisioning-3161"
Sep 19 13:37:45.189: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3161" to be fully deleted
Sep 19 13:37:45.302: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-6nhm
Sep 19 13:37:53.649: INFO: Running '/tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-3161 exec pod-subpath-test-inlinevolume-6nhm --container test-container-volume-inlinevolume-6nhm -- /bin/sh -c rm -r /test-volume/provisioning-3161'
Sep 19 13:37:54.809: INFO: stderr: ""
Sep 19 13:37:54.809: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-6nhm
Sep 19 13:37:54.809: INFO: Deleting pod "pod-subpath-test-inlinevolume-6nhm" in namespace "provisioning-3161"
Sep 19 13:37:54.920: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-6nhm" to be fully deleted
STEP: Deleting pod
Sep 19 13:37:59.146: INFO: Deleting pod "pod-subpath-test-inlinevolume-6nhm" in namespace "provisioning-3161"
Sep 19 13:37:59.368: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3161" in namespace "provisioning-3161" to be "Succeeded or Failed"
Sep 19 13:37:59.478: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 110.543253ms
Sep 19 13:38:01.619: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250822626s
Sep 19 13:38:03.729: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360846355s
Sep 19 13:38:05.852: INFO: Pod "hostpath-symlink-prep-provisioning-3161": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.483827351s
STEP: Saw pod success
Sep 19 13:38:05.852: INFO: Pod "hostpath-symlink-prep-provisioning-3161" satisfied condition "Succeeded or Failed"
Sep 19 13:38:05.852: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3161" in namespace "provisioning-3161"
Sep 19 13:38:06.006: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3161" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:06.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3161" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":5,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 197 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:08.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8748" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":11,"skipped":77,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:08.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
Sep 19 13:38:09.599: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.763 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 12 lines ...
Sep 19 13:36:57.160: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:36:59.159: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Sep 19 13:36:59.270: INFO: Running '/tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6362 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Sep 19 13:37:00.398: INFO: rc: 7
Sep 19 13:37:00.517: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Sep 19 13:37:00.627: INFO: Pod kube-proxy-mode-detector no longer exists
Sep 19 13:37:00.627: INFO: Couldn't detect KubeProxy mode - test failure may be expected: error running /tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6362 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode
command terminated with exit code 7

error:
exit status 7
STEP: creating service affinity-nodeport-timeout in namespace services-6362
STEP: creating replication controller affinity-nodeport-timeout in namespace services-6362
I0919 13:37:00.856302    4802 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-6362, replica count: 3
I0919 13:37:04.008397    4802 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0919 13:37:07.008691    4802 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
... skipping 54 lines ...
• [SLOW TEST:75.828 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:10.065: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 37 lines ...
• [SLOW TEST:52.053 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":7,"skipped":74,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:11.083: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 31 lines ...
STEP: Destroying namespace "apply-5523" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":8,"skipped":81,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:18.311 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":2,"skipped":30,"failed":1,"failures":["[sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":8,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":9,"skipped":95,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:24.714: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":3,"skipped":17,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":12,"skipped":78,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:23.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 19 13:38:24.522: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-5548b95c-3e8e-4097-83e4-2a81103dcaf3" in namespace "security-context-test-7376" to be "Succeeded or Failed"
Sep 19 13:38:24.630: INFO: Pod "busybox-privileged-false-5548b95c-3e8e-4097-83e4-2a81103dcaf3": Phase="Pending", Reason="", readiness=false. Elapsed: 108.017298ms
Sep 19 13:38:26.739: INFO: Pod "busybox-privileged-false-5548b95c-3e8e-4097-83e4-2a81103dcaf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216534275s
Sep 19 13:38:28.848: INFO: Pod "busybox-privileged-false-5548b95c-3e8e-4097-83e4-2a81103dcaf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.32579142s
Sep 19 13:38:28.848: INFO: Pod "busybox-privileged-false-5548b95c-3e8e-4097-83e4-2a81103dcaf3" satisfied condition "Succeeded or Failed"
Sep 19 13:38:28.974: INFO: Got logs for pod "busybox-privileged-false-5548b95c-3e8e-4097-83e4-2a81103dcaf3": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:28.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7376" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":78,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:29.229: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
Sep 19 13:38:21.019: INFO: PersistentVolumeClaim pvc-6m8fn found but phase is Pending instead of Bound.
Sep 19 13:38:23.129: INFO: PersistentVolumeClaim pvc-6m8fn found and phase=Bound (12.788448802s)
Sep 19 13:38:23.129: INFO: Waiting up to 3m0s for PersistentVolume local-9gbs7 to have phase Bound
Sep 19 13:38:23.239: INFO: PersistentVolume local-9gbs7 found and phase=Bound (109.763581ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rfwg
STEP: Creating a pod to test subpath
Sep 19 13:38:23.571: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rfwg" in namespace "provisioning-4575" to be "Succeeded or Failed"
Sep 19 13:38:23.681: INFO: Pod "pod-subpath-test-preprovisionedpv-rfwg": Phase="Pending", Reason="", readiness=false. Elapsed: 109.887132ms
Sep 19 13:38:25.791: INFO: Pod "pod-subpath-test-preprovisionedpv-rfwg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220472008s
Sep 19 13:38:27.902: INFO: Pod "pod-subpath-test-preprovisionedpv-rfwg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331348375s
STEP: Saw pod success
Sep 19 13:38:27.902: INFO: Pod "pod-subpath-test-preprovisionedpv-rfwg" satisfied condition "Succeeded or Failed"
Sep 19 13:38:28.012: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-rfwg container test-container-subpath-preprovisionedpv-rfwg: <nil>
STEP: delete the pod
Sep 19 13:38:28.258: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rfwg to disappear
Sep 19 13:38:28.368: INFO: Pod pod-subpath-test-preprovisionedpv-rfwg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rfwg
Sep 19 13:38:28.368: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rfwg" in namespace "provisioning-4575"
... skipping 60 lines ...
• [SLOW TEST:30.447 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:30.860: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 50 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 19 13:38:30.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2021, time.September, 19, 13, 38, 30, 0, time.Local), LastTransitionTime:time.Date(2021, time.September, 19, 13, 38, 30, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2021, time.September, 19, 13, 38, 30, 0, time.Local), LastTransitionTime:time.Date(2021, time.September, 19, 13, 38, 30, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-8f89dbb55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 19 13:38:34.220: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:35.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9632" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:6.548 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":14,"skipped":85,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:35.809: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:239

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:35:24.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 222 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should provide basic identity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:128
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":2,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 50 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:36.545: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 173 lines ...
Sep 19 13:38:07.033: INFO: PersistentVolumeClaim pvc-8w89f found but phase is Pending instead of Bound.
Sep 19 13:38:09.142: INFO: PersistentVolumeClaim pvc-8w89f found and phase=Bound (4.329048664s)
Sep 19 13:38:09.142: INFO: Waiting up to 3m0s for PersistentVolume local-fthjm to have phase Bound
Sep 19 13:38:09.251: INFO: PersistentVolume local-fthjm found and phase=Bound (108.858873ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tchr
STEP: Creating a pod to test subpath
Sep 19 13:38:09.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tchr" in namespace "provisioning-1826" to be "Succeeded or Failed"
Sep 19 13:38:09.695: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 109.646169ms
Sep 19 13:38:11.806: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221393329s
Sep 19 13:38:13.916: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33150818s
Sep 19 13:38:16.027: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442466613s
Sep 19 13:38:18.142: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.556723863s
Sep 19 13:38:20.253: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668295739s
Sep 19 13:38:22.363: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.778053595s
Sep 19 13:38:24.474: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.888777066s
Sep 19 13:38:26.584: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.999166668s
STEP: Saw pod success
Sep 19 13:38:26.584: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr" satisfied condition "Succeeded or Failed"
Sep 19 13:38:26.694: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-tchr container test-container-subpath-preprovisionedpv-tchr: <nil>
STEP: delete the pod
Sep 19 13:38:26.933: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tchr to disappear
Sep 19 13:38:27.042: INFO: Pod pod-subpath-test-preprovisionedpv-tchr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tchr
Sep 19 13:38:27.042: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tchr" in namespace "provisioning-1826"
STEP: Creating pod pod-subpath-test-preprovisionedpv-tchr
STEP: Creating a pod to test subpath
Sep 19 13:38:27.265: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tchr" in namespace "provisioning-1826" to be "Succeeded or Failed"
Sep 19 13:38:27.375: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 109.137947ms
Sep 19 13:38:29.488: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222801753s
Sep 19 13:38:31.600: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334285729s
Sep 19 13:38:33.723: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.457527509s
Sep 19 13:38:35.834: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.56858945s
STEP: Saw pod success
Sep 19 13:38:35.834: INFO: Pod "pod-subpath-test-preprovisionedpv-tchr" satisfied condition "Succeeded or Failed"
Sep 19 13:38:35.954: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-tchr container test-container-subpath-preprovisionedpv-tchr: <nil>
STEP: delete the pod
Sep 19 13:38:36.181: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tchr to disappear
Sep 19 13:38:36.290: INFO: Pod pod-subpath-test-preprovisionedpv-tchr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tchr
Sep 19 13:38:36.290: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tchr" in namespace "provisioning-1826"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":10,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:37.846: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":63,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:37:54.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
• [SLOW TEST:43.477 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:582
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":8,"skipped":63,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:35.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-57cff83b-6dc0-4a30-bca5-401a83702c9e
STEP: Creating a pod to test consume configMaps
Sep 19 13:38:36.594: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7fd9d167-efa7-4bb4-9ce2-153ae020ca6b" in namespace "projected-2506" to be "Succeeded or Failed"
Sep 19 13:38:36.702: INFO: Pod "pod-projected-configmaps-7fd9d167-efa7-4bb4-9ce2-153ae020ca6b": Phase="Pending", Reason="", readiness=false. Elapsed: 108.259744ms
Sep 19 13:38:38.814: INFO: Pod "pod-projected-configmaps-7fd9d167-efa7-4bb4-9ce2-153ae020ca6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.22029842s
STEP: Saw pod success
Sep 19 13:38:38.814: INFO: Pod "pod-projected-configmaps-7fd9d167-efa7-4bb4-9ce2-153ae020ca6b" satisfied condition "Succeeded or Failed"
Sep 19 13:38:38.923: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-projected-configmaps-7fd9d167-efa7-4bb4-9ce2-153ae020ca6b container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:38:39.150: INFO: Waiting for pod pod-projected-configmaps-7fd9d167-efa7-4bb4-9ce2-153ae020ca6b to disappear
Sep 19 13:38:39.259: INFO: Pod pod-projected-configmaps-7fd9d167-efa7-4bb4-9ce2-153ae020ca6b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:39.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2506" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:39.496: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 86 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-22b9cf77-50c9-4c7f-8228-b0826e47bd8d
STEP: Creating a pod to test consume configMaps
Sep 19 13:38:36.639: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf19c7f0-901d-45f4-9f45-16656ae4a151" in namespace "projected-1630" to be "Succeeded or Failed"
Sep 19 13:38:36.750: INFO: Pod "pod-projected-configmaps-bf19c7f0-901d-45f4-9f45-16656ae4a151": Phase="Pending", Reason="", readiness=false. Elapsed: 110.919276ms
Sep 19 13:38:38.862: INFO: Pod "pod-projected-configmaps-bf19c7f0-901d-45f4-9f45-16656ae4a151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222530836s
STEP: Saw pod success
Sep 19 13:38:38.862: INFO: Pod "pod-projected-configmaps-bf19c7f0-901d-45f4-9f45-16656ae4a151" satisfied condition "Succeeded or Failed"
Sep 19 13:38:38.973: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-projected-configmaps-bf19c7f0-901d-45f4-9f45-16656ae4a151 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep 19 13:38:39.208: INFO: Waiting for pod pod-projected-configmaps-bf19c7f0-901d-45f4-9f45-16656ae4a151 to disappear
Sep 19 13:38:39.318: INFO: Pod pod-projected-configmaps-bf19c7f0-901d-45f4-9f45-16656ae4a151 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:39.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1630" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":6,"skipped":29,"failed":0}
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:29.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 65 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Sep 19 13:38:40.298: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-781d9a4c-2bf9-4b10-9bd4-3af6359ab7b9" in namespace "security-context-test-2806" to be "Succeeded or Failed"
Sep 19 13:38:40.427: INFO: Pod "alpine-nnp-true-781d9a4c-2bf9-4b10-9bd4-3af6359ab7b9": Phase="Pending", Reason="", readiness=false. Elapsed: 128.254535ms
Sep 19 13:38:42.538: INFO: Pod "alpine-nnp-true-781d9a4c-2bf9-4b10-9bd4-3af6359ab7b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239938542s
Sep 19 13:38:44.657: INFO: Pod "alpine-nnp-true-781d9a4c-2bf9-4b10-9bd4-3af6359ab7b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.358807547s
Sep 19 13:38:44.657: INFO: Pod "alpine-nnp-true-781d9a4c-2bf9-4b10-9bd4-3af6359ab7b9" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:44.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2806" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":23,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:45.053: INFO: Only supported for providers [vsphere] (not aws)
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":9,"skipped":87,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:45.633: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76 lines ...
• [SLOW TEST:8.108 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":11,"skipped":56,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":32,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:505
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":4,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:48.075: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 154 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:48.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Sep 19 13:38:48.840: INFO: found topology map[topology.kubernetes.io/zone:eu-central-1a]
Sep 19 13:38:48.841: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 19 13:38:48.841: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":5,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:49.250: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
STEP: Destroying namespace "services-8823" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":6,"skipped":81,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Sep 19 13:38:10.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Sep 19 13:38:10.626: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 19 13:38:10.849: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2182" in namespace "provisioning-2182" to be "Succeeded or Failed"
Sep 19 13:38:10.962: INFO: Pod "hostpath-symlink-prep-provisioning-2182": Phase="Pending", Reason="", readiness=false. Elapsed: 112.505041ms
Sep 19 13:38:13.072: INFO: Pod "hostpath-symlink-prep-provisioning-2182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222238066s
Sep 19 13:38:15.183: INFO: Pod "hostpath-symlink-prep-provisioning-2182": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333106219s
Sep 19 13:38:17.293: INFO: Pod "hostpath-symlink-prep-provisioning-2182": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443336409s
Sep 19 13:38:19.402: INFO: Pod "hostpath-symlink-prep-provisioning-2182": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55298223s
Sep 19 13:38:21.512: INFO: Pod "hostpath-symlink-prep-provisioning-2182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.662819177s
STEP: Saw pod success
Sep 19 13:38:21.512: INFO: Pod "hostpath-symlink-prep-provisioning-2182" satisfied condition "Succeeded or Failed"
Sep 19 13:38:21.512: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2182" in namespace "provisioning-2182"
Sep 19 13:38:21.625: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2182" to be fully deleted
Sep 19 13:38:21.735: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-hdkb
STEP: Creating a pod to test atomic-volume-subpath
Sep 19 13:38:21.845: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-hdkb" in namespace "provisioning-2182" to be "Succeeded or Failed"
Sep 19 13:38:21.955: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Pending", Reason="", readiness=false. Elapsed: 109.474332ms
Sep 19 13:38:24.065: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219395949s
Sep 19 13:38:26.176: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330892482s
Sep 19 13:38:28.288: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Running", Reason="", readiness=true. Elapsed: 6.442249164s
Sep 19 13:38:30.398: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Running", Reason="", readiness=true. Elapsed: 8.552375383s
Sep 19 13:38:32.510: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Running", Reason="", readiness=true. Elapsed: 10.66486136s
... skipping 2 lines ...
Sep 19 13:38:38.847: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Running", Reason="", readiness=true. Elapsed: 17.001969638s
Sep 19 13:38:40.958: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Running", Reason="", readiness=true. Elapsed: 19.112850099s
Sep 19 13:38:43.070: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Running", Reason="", readiness=true. Elapsed: 21.22411497s
Sep 19 13:38:45.180: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Running", Reason="", readiness=true. Elapsed: 23.334474754s
Sep 19 13:38:47.291: INFO: Pod "pod-subpath-test-inlinevolume-hdkb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.445395454s
STEP: Saw pod success
Sep 19 13:38:47.291: INFO: Pod "pod-subpath-test-inlinevolume-hdkb" satisfied condition "Succeeded or Failed"
Sep 19 13:38:47.403: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-hdkb container test-container-subpath-inlinevolume-hdkb: <nil>
STEP: delete the pod
Sep 19 13:38:47.638: INFO: Waiting for pod pod-subpath-test-inlinevolume-hdkb to disappear
Sep 19 13:38:47.747: INFO: Pod pod-subpath-test-inlinevolume-hdkb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-hdkb
Sep 19 13:38:47.747: INFO: Deleting pod "pod-subpath-test-inlinevolume-hdkb" in namespace "provisioning-2182"
STEP: Deleting pod
Sep 19 13:38:47.856: INFO: Deleting pod "pod-subpath-test-inlinevolume-hdkb" in namespace "provisioning-2182"
Sep 19 13:38:48.079: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2182" in namespace "provisioning-2182" to be "Succeeded or Failed"
Sep 19 13:38:48.194: INFO: Pod "hostpath-symlink-prep-provisioning-2182": Phase="Pending", Reason="", readiness=false. Elapsed: 114.421562ms
Sep 19 13:38:50.306: INFO: Pod "hostpath-symlink-prep-provisioning-2182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.226529775s
STEP: Saw pod success
Sep 19 13:38:50.306: INFO: Pod "hostpath-symlink-prep-provisioning-2182" satisfied condition "Succeeded or Failed"
Sep 19 13:38:50.306: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2182" in namespace "provisioning-2182"
Sep 19 13:38:50.421: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2182" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:50.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2182" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":21,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:50.794: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 5 lines ...
Sep 19 13:38:47.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 19 13:38:48.230: INFO: Waiting up to 5m0s for pod "downward-api-392e24d4-368d-4cfa-980b-f2782a6dd738" in namespace "downward-api-3389" to be "Succeeded or Failed"
Sep 19 13:38:48.342: INFO: Pod "downward-api-392e24d4-368d-4cfa-980b-f2782a6dd738": Phase="Pending", Reason="", readiness=false. Elapsed: 111.644075ms
Sep 19 13:38:50.453: INFO: Pod "downward-api-392e24d4-368d-4cfa-980b-f2782a6dd738": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222472656s
STEP: Saw pod success
Sep 19 13:38:50.453: INFO: Pod "downward-api-392e24d4-368d-4cfa-980b-f2782a6dd738" satisfied condition "Succeeded or Failed"
Sep 19 13:38:50.566: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod downward-api-392e24d4-368d-4cfa-980b-f2782a6dd738 container dapi-container: <nil>
STEP: delete the pod
Sep 19 13:38:50.801: INFO: Waiting for pod downward-api-392e24d4-368d-4cfa-980b-f2782a6dd738 to disappear
Sep 19 13:38:50.910: INFO: Pod downward-api-392e24d4-368d-4cfa-980b-f2782a6dd738 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:50.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3389" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:51.156: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Sep 19 13:38:50.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Sep 19 13:38:50.838: INFO: Waiting up to 5m0s for pod "security-context-d6bb9593-8a47-417f-92d0-f4d0e57a85c6" in namespace "security-context-8594" to be "Succeeded or Failed"
Sep 19 13:38:50.949: INFO: Pod "security-context-d6bb9593-8a47-417f-92d0-f4d0e57a85c6": Phase="Pending", Reason="", readiness=false. Elapsed: 111.351108ms
Sep 19 13:38:53.075: INFO: Pod "security-context-d6bb9593-8a47-417f-92d0-f4d0e57a85c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.237132654s
STEP: Saw pod success
Sep 19 13:38:53.075: INFO: Pod "security-context-d6bb9593-8a47-417f-92d0-f4d0e57a85c6" satisfied condition "Succeeded or Failed"
Sep 19 13:38:53.184: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod security-context-d6bb9593-8a47-417f-92d0-f4d0e57a85c6 container test-container: <nil>
STEP: delete the pod
Sep 19 13:38:53.432: INFO: Waiting for pod security-context-d6bb9593-8a47-417f-92d0-f4d0e57a85c6 to disappear
Sep 19 13:38:53.540: INFO: Pod security-context-d6bb9593-8a47-417f-92d0-f4d0e57a85c6 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:53.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8594" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":7,"skipped":87,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:53.802: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":7,"skipped":45,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:38:54.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-950" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":8,"skipped":51,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 19 13:38:45.751: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c" in namespace "downward-api-3042" to be "Succeeded or Failed"
Sep 19 13:38:45.861: INFO: Pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c": Phase="Pending", Reason="", readiness=false. Elapsed: 110.601597ms
Sep 19 13:38:47.972: INFO: Pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221468335s
Sep 19 13:38:50.086: INFO: Pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335315211s
Sep 19 13:38:52.197: INFO: Pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446672997s
Sep 19 13:38:54.308: INFO: Pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.557667544s
Sep 19 13:38:56.420: INFO: Pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.669775291s
Sep 19 13:38:58.532: INFO: Pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.781046687s
STEP: Saw pod success
Sep 19 13:38:58.532: INFO: Pod "downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c" satisfied condition "Succeeded or Failed"
Sep 19 13:38:58.643: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c container client-container: <nil>
STEP: delete the pod
Sep 19 13:38:58.891: INFO: Waiting for pod downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c to disappear
Sep 19 13:38:59.002: INFO: Pod downwardapi-volume-8f903aa7-6d69-4987-864b-e78e386dac2c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.148 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:59.237: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
• [SLOW TEST:5.757 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":8,"skipped":94,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:38:59.584: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 115 lines ...
Sep 19 13:38:54.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Sep 19 13:38:54.712: INFO: Waiting up to 5m0s for pod "client-containers-0bd76603-b5cf-48a3-9c00-1c30a0d003e3" in namespace "containers-6137" to be "Succeeded or Failed"
Sep 19 13:38:54.825: INFO: Pod "client-containers-0bd76603-b5cf-48a3-9c00-1c30a0d003e3": Phase="Pending", Reason="", readiness=false. Elapsed: 113.61251ms
Sep 19 13:38:56.937: INFO: Pod "client-containers-0bd76603-b5cf-48a3-9c00-1c30a0d003e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224957406s
Sep 19 13:38:59.051: INFO: Pod "client-containers-0bd76603-b5cf-48a3-9c00-1c30a0d003e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.338917947s
STEP: Saw pod success
Sep 19 13:38:59.051: INFO: Pod "client-containers-0bd76603-b5cf-48a3-9c00-1c30a0d003e3" satisfied condition "Succeeded or Failed"
Sep 19 13:38:59.160: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod client-containers-0bd76603-b5cf-48a3-9c00-1c30a0d003e3 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:38:59.388: INFO: Waiting for pod client-containers-0bd76603-b5cf-48a3-9c00-1c30a0d003e3 to disappear
Sep 19 13:38:59.498: INFO: Pod client-containers-0bd76603-b5cf-48a3-9c00-1c30a0d003e3 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:00.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-208" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:00.602: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
• [SLOW TEST:7.325 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":9,"skipped":60,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:02.061: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 237 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":11,"skipped":44,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Sep 19 13:39:02.830: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4810" to be "Succeeded or Failed"
Sep 19 13:39:02.942: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 112.744398ms
Sep 19 13:39:05.053: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222943452s
STEP: Saw pod success
Sep 19 13:39:05.053: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 19 13:39:05.163: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Sep 19 13:39:05.389: INFO: Waiting for pod pod-host-path-test to disappear
Sep 19 13:39:05.498: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:05.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4810" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":10,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 95 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":10,"skipped":100,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:59.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 25 lines ...
• [SLOW TEST:12.180 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":11,"skipped":100,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:49.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Sep 19 13:38:49.844: INFO: Waiting up to 5m0s for pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7" in namespace "svcaccounts-6200" to be "Succeeded or Failed"
Sep 19 13:38:49.954: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 109.470271ms
Sep 19 13:38:52.070: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.225942161s
STEP: Saw pod success
Sep 19 13:38:52.070: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7" satisfied condition "Succeeded or Failed"
Sep 19 13:38:52.180: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:38:52.406: INFO: Waiting for pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 to disappear
Sep 19 13:38:52.515: INFO: Pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 no longer exists
STEP: Creating a pod to test service account token: 
Sep 19 13:38:52.630: INFO: Waiting up to 5m0s for pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7" in namespace "svcaccounts-6200" to be "Succeeded or Failed"
Sep 19 13:38:52.739: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 109.130955ms
Sep 19 13:38:54.850: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219939306s
STEP: Saw pod success
Sep 19 13:38:54.850: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7" satisfied condition "Succeeded or Failed"
Sep 19 13:38:54.962: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:38:55.191: INFO: Waiting for pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 to disappear
Sep 19 13:38:55.299: INFO: Pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 no longer exists
STEP: Creating a pod to test service account token: 
Sep 19 13:38:55.411: INFO: Waiting up to 5m0s for pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7" in namespace "svcaccounts-6200" to be "Succeeded or Failed"
Sep 19 13:38:55.520: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 109.24472ms
Sep 19 13:38:57.630: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219370079s
Sep 19 13:38:59.739: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328345098s
Sep 19 13:39:01.848: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437735602s
STEP: Saw pod success
Sep 19 13:39:01.848: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7" satisfied condition "Succeeded or Failed"
Sep 19 13:39:01.958: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:39:02.183: INFO: Waiting for pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 to disappear
Sep 19 13:39:02.295: INFO: Pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 no longer exists
STEP: Creating a pod to test service account token: 
Sep 19 13:39:02.405: INFO: Waiting up to 5m0s for pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7" in namespace "svcaccounts-6200" to be "Succeeded or Failed"
Sep 19 13:39:02.516: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 110.542253ms
Sep 19 13:39:04.626: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220849838s
Sep 19 13:39:06.736: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330504882s
Sep 19 13:39:08.845: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43983107s
Sep 19 13:39:10.956: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.550554348s
STEP: Saw pod success
Sep 19 13:39:10.956: INFO: Pod "test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7" satisfied condition "Succeeded or Failed"
Sep 19 13:39:11.065: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 container agnhost-container: <nil>
STEP: delete the pod
Sep 19 13:39:11.317: INFO: Waiting for pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 to disappear
Sep 19 13:39:11.426: INFO: Pod test-pod-734bd23d-941b-472a-88b4-3d40f8ff23a7 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:22.463 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:11.662: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
... skipping 49 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Sep 19 13:39:04.548: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 19 13:39:04.548: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-4btd
STEP: Creating a pod to test subpath
Sep 19 13:39:04.666: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-4btd" in namespace "provisioning-3668" to be "Succeeded or Failed"
Sep 19 13:39:04.776: INFO: Pod "pod-subpath-test-inlinevolume-4btd": Phase="Pending", Reason="", readiness=false. Elapsed: 109.741409ms
Sep 19 13:39:06.886: INFO: Pod "pod-subpath-test-inlinevolume-4btd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219964576s
Sep 19 13:39:08.997: INFO: Pod "pod-subpath-test-inlinevolume-4btd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33045422s
Sep 19 13:39:11.107: INFO: Pod "pod-subpath-test-inlinevolume-4btd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.440337001s
STEP: Saw pod success
Sep 19 13:39:11.107: INFO: Pod "pod-subpath-test-inlinevolume-4btd" satisfied condition "Succeeded or Failed"
Sep 19 13:39:11.216: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-4btd container test-container-volume-inlinevolume-4btd: <nil>
STEP: delete the pod
Sep 19 13:39:11.449: INFO: Waiting for pod pod-subpath-test-inlinevolume-4btd to disappear
Sep 19 13:39:11.558: INFO: Pod pod-subpath-test-inlinevolume-4btd no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-4btd
Sep 19 13:39:11.558: INFO: Deleting pod "pod-subpath-test-inlinevolume-4btd" in namespace "provisioning-3668"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":12,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:12.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9311" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":12,"skipped":103,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:12.824: INFO: Only supported for providers [azure] (not aws)
... skipping 103 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":4,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:13.332: INFO: Only supported for providers [gce gke] (not aws)
... skipping 106 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should implement legacy replacement when the update strategy is OnDelete
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:503
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":5,"skipped":87,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:14.707: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 84 lines ...
STEP: Deleting pod hostexec-ip-172-20-50-204.eu-central-1.compute.internal-d9jcd in namespace volumemode-3908
Sep 19 13:39:06.204: INFO: Deleting pod "pod-876125e0-9c41-4605-9a3b-1a8f81bd5d67" in namespace "volumemode-3908"
Sep 19 13:39:06.316: INFO: Wait up to 5m0s for pod "pod-876125e0-9c41-4605-9a3b-1a8f81bd5d67" to be fully deleted
STEP: Deleting pv and pvc
Sep 19 13:39:08.538: INFO: Deleting PersistentVolumeClaim "pvc-h89qp"
Sep 19 13:39:08.668: INFO: Deleting PersistentVolume "aws-xqfj2"
Sep 19 13:39:09.004: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0a46ee4609af61bdf", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a46ee4609af61bdf is currently attached to i-09af74daadc02aac2
	status code: 400, request id: 6bb8ef76-9d1b-44f2-98d8-f1af605518ab
Sep 19 13:39:14.602: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0a46ee4609af61bdf".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:14.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-3908" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":9,"skipped":64,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:15.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-617" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":10,"skipped":68,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":93,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:02.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:76.189 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":93,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:9.018 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:20.767: INFO: Only supported for providers [azure] (not aws)
... skipping 76 lines ...
• [SLOW TEST:6.822 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":6,"skipped":98,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":12,"skipped":65,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:39:05.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
• [SLOW TEST:15.677 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":13,"skipped":65,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 59 lines ...
Sep 19 13:39:17.017: INFO: Waiting for pod aws-client to disappear
Sep 19 13:39:17.137: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 19 13:39:17.137: INFO: Deleting PersistentVolumeClaim "pvc-69gfw"
Sep 19 13:39:17.255: INFO: Deleting PersistentVolume "aws-twjzq"
Sep 19 13:39:17.987: INFO: Couldn't delete PD "aws://eu-central-1a/vol-01acf6ecad5f09fd9", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01acf6ecad5f09fd9 is currently attached to i-02914935a9a348924
	status code: 400, request id: c28f1dbd-fb33-4217-9996-6e31517f28e0
Sep 19 13:39:23.588: INFO: Successfully deleted PD "aws://eu-central-1a/vol-01acf6ecad5f09fd9".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:23.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1306" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":9,"skipped":111,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:39:21.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 19 13:39:22.249: INFO: Waiting up to 5m0s for pod "downward-api-253e5346-6aa6-4e73-b2c2-0129f2893385" in namespace "downward-api-3918" to be "Succeeded or Failed"
Sep 19 13:39:22.358: INFO: Pod "downward-api-253e5346-6aa6-4e73-b2c2-0129f2893385": Phase="Pending", Reason="", readiness=false. Elapsed: 108.738669ms
Sep 19 13:39:24.469: INFO: Pod "downward-api-253e5346-6aa6-4e73-b2c2-0129f2893385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220187s
STEP: Saw pod success
Sep 19 13:39:24.469: INFO: Pod "downward-api-253e5346-6aa6-4e73-b2c2-0129f2893385" satisfied condition "Succeeded or Failed"
Sep 19 13:39:24.593: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod downward-api-253e5346-6aa6-4e73-b2c2-0129f2893385 container dapi-container: <nil>
STEP: delete the pod
Sep 19 13:39:24.868: INFO: Waiting for pod downward-api-253e5346-6aa6-4e73-b2c2-0129f2893385 to disappear
Sep 19 13:39:24.977: INFO: Pod downward-api-253e5346-6aa6-4e73-b2c2-0129f2893385 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 69 lines ...
• [SLOW TEST:13.700 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create endpoints for unready pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1648
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":13,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:25.775: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 134 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:26.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-3500" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":14,"skipped":61,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":10,"skipped":115,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:39:25.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:27.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-1244" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":11,"skipped":115,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:9.824 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":12,"skipped":102,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:28.869: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 292 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":9,"skipped":79,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:29.533: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
Sep 19 13:39:21.222: INFO: PersistentVolumeClaim pvc-cnkln found but phase is Pending instead of Bound.
Sep 19 13:39:23.331: INFO: PersistentVolumeClaim pvc-cnkln found and phase=Bound (6.468187686s)
Sep 19 13:39:23.331: INFO: Waiting up to 3m0s for PersistentVolume local-9l7m9 to have phase Bound
Sep 19 13:39:23.439: INFO: PersistentVolume local-9l7m9 found and phase=Bound (107.506531ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jsw6
STEP: Creating a pod to test subpath
Sep 19 13:39:23.765: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jsw6" in namespace "provisioning-8685" to be "Succeeded or Failed"
Sep 19 13:39:23.872: INFO: Pod "pod-subpath-test-preprovisionedpv-jsw6": Phase="Pending", Reason="", readiness=false. Elapsed: 107.708601ms
Sep 19 13:39:25.992: INFO: Pod "pod-subpath-test-preprovisionedpv-jsw6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227078912s
Sep 19 13:39:28.140: INFO: Pod "pod-subpath-test-preprovisionedpv-jsw6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375668723s
Sep 19 13:39:30.254: INFO: Pod "pod-subpath-test-preprovisionedpv-jsw6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.489200066s
STEP: Saw pod success
Sep 19 13:39:30.254: INFO: Pod "pod-subpath-test-preprovisionedpv-jsw6" satisfied condition "Succeeded or Failed"
Sep 19 13:39:30.362: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-jsw6 container test-container-subpath-preprovisionedpv-jsw6: <nil>
STEP: delete the pod
Sep 19 13:39:30.607: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jsw6 to disappear
Sep 19 13:39:30.716: INFO: Pod pod-subpath-test-preprovisionedpv-jsw6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jsw6
Sep 19 13:39:30.716: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jsw6" in namespace "provisioning-8685"
... skipping 208 lines ...
Sep 19 13:38:46.212: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-q747b] to have phase Bound
Sep 19 13:38:46.325: INFO: PersistentVolumeClaim pvc-q747b found and phase=Bound (112.901067ms)
STEP: Deleting the previously created pod
Sep 19 13:39:00.881: INFO: Deleting pod "pvc-volume-tester-vbdmc" in namespace "csi-mock-volumes-2155"
Sep 19 13:39:00.995: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vbdmc" to be fully deleted
STEP: Checking CSI driver logs
Sep 19 13:39:07.350: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/336b17b0-1d8c-4607-a91c-1cf3ca713cac/volumes/kubernetes.io~csi/pvc-992aa08c-22fb-43cf-ac62-b4946c05d4f0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-vbdmc
Sep 19 13:39:07.350: INFO: Deleting pod "pvc-volume-tester-vbdmc" in namespace "csi-mock-volumes-2155"
STEP: Deleting claim pvc-q747b
Sep 19 13:39:07.695: INFO: Waiting up to 2m0s for PersistentVolume pvc-992aa08c-22fb-43cf-ac62-b4946c05d4f0 to get deleted
Sep 19 13:39:07.822: INFO: PersistentVolume pvc-992aa08c-22fb-43cf-ac62-b4946c05d4f0 found and phase=Released (126.645102ms)
Sep 19 13:39:09.933: INFO: PersistentVolume pvc-992aa08c-22fb-43cf-ac62-b4946c05d4f0 found and phase=Released (2.237078573s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1496
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1524
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":6,"skipped":25,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":13,"skipped":116,"failed":0}
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:39:33.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv-protection
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
Sep 19 13:39:34.963: INFO: AfterEach: Cleaning up test resources.
Sep 19 13:39:34.963: INFO: Deleting PersistentVolumeClaim "pvc-dvc5d"
Sep 19 13:39:35.071: INFO: Deleting PersistentVolume "hostpath-nx62v"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":14,"skipped":116,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:431
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":11,"skipped":75,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:35.450: INFO: Driver local doesn't support GenericEphemeralVolume -- skipping
[AfterEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
Sep 19 13:39:22.447: INFO: PersistentVolumeClaim pvc-qblzg found but phase is Pending instead of Bound.
Sep 19 13:39:24.562: INFO: PersistentVolumeClaim pvc-qblzg found and phase=Bound (14.894204581s)
Sep 19 13:39:24.562: INFO: Waiting up to 3m0s for PersistentVolume local-gzvgh to have phase Bound
Sep 19 13:39:24.684: INFO: PersistentVolume local-gzvgh found and phase=Bound (122.198331ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xlr6
STEP: Creating a pod to test subpath
Sep 19 13:39:25.023: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xlr6" in namespace "provisioning-5697" to be "Succeeded or Failed"
Sep 19 13:39:25.172: INFO: Pod "pod-subpath-test-preprovisionedpv-xlr6": Phase="Pending", Reason="", readiness=false. Elapsed: 148.923572ms
Sep 19 13:39:27.282: INFO: Pod "pod-subpath-test-preprovisionedpv-xlr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259068524s
Sep 19 13:39:29.392: INFO: Pod "pod-subpath-test-preprovisionedpv-xlr6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369206695s
Sep 19 13:39:31.504: INFO: Pod "pod-subpath-test-preprovisionedpv-xlr6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481570133s
Sep 19 13:39:33.618: INFO: Pod "pod-subpath-test-preprovisionedpv-xlr6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.595064938s
Sep 19 13:39:35.727: INFO: Pod "pod-subpath-test-preprovisionedpv-xlr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.704719886s
STEP: Saw pod success
Sep 19 13:39:35.727: INFO: Pod "pod-subpath-test-preprovisionedpv-xlr6" satisfied condition "Succeeded or Failed"
Sep 19 13:39:35.837: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-xlr6 container test-container-subpath-preprovisionedpv-xlr6: <nil>
STEP: delete the pod
Sep 19 13:39:36.065: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xlr6 to disappear
Sep 19 13:39:36.175: INFO: Pod pod-subpath-test-preprovisionedpv-xlr6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xlr6
Sep 19 13:39:36.175: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xlr6" in namespace "provisioning-5697"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":113,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:37.793: INFO: Only supported for providers [openstack] (not aws)
... skipping 45 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Sep 19 13:39:36.194: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-8ba9c62d-0549-4f04-b103-faba5784cde8" in namespace "security-context-test-1090" to be "Succeeded or Failed"
Sep 19 13:39:36.304: INFO: Pod "busybox-privileged-true-8ba9c62d-0549-4f04-b103-faba5784cde8": Phase="Pending", Reason="", readiness=false. Elapsed: 110.441513ms
Sep 19 13:39:38.416: INFO: Pod "busybox-privileged-true-8ba9c62d-0549-4f04-b103-faba5784cde8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.22148486s
Sep 19 13:39:38.416: INFO: Pod "busybox-privileged-true-8ba9c62d-0549-4f04-b103-faba5784cde8" satisfied condition "Succeeded or Failed"
Sep 19 13:39:38.528: INFO: Got logs for pod "busybox-privileged-true-8ba9c62d-0549-4f04-b103-faba5784cde8": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:38.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1090" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":12,"skipped":82,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 19 13:39:35.454: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d8411e5-984d-4004-ba9e-b7436401a062" in namespace "projected-2884" to be "Succeeded or Failed"
Sep 19 13:39:35.564: INFO: Pod "downwardapi-volume-9d8411e5-984d-4004-ba9e-b7436401a062": Phase="Pending", Reason="", readiness=false. Elapsed: 109.960687ms
Sep 19 13:39:37.674: INFO: Pod "downwardapi-volume-9d8411e5-984d-4004-ba9e-b7436401a062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219853082s
Sep 19 13:39:39.784: INFO: Pod "downwardapi-volume-9d8411e5-984d-4004-ba9e-b7436401a062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329530328s
STEP: Saw pod success
Sep 19 13:39:39.784: INFO: Pod "downwardapi-volume-9d8411e5-984d-4004-ba9e-b7436401a062" satisfied condition "Succeeded or Failed"
Sep 19 13:39:39.893: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod downwardapi-volume-9d8411e5-984d-4004-ba9e-b7436401a062 container client-container: <nil>
STEP: delete the pod
Sep 19 13:39:40.123: INFO: Waiting for pod downwardapi-volume-9d8411e5-984d-4004-ba9e-b7436401a062 to disappear
Sep 19 13:39:40.232: INFO: Pod downwardapi-volume-9d8411e5-984d-4004-ba9e-b7436401a062 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.666 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":6,"skipped":66,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:39:34.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:9.101 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":7,"skipped":66,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:43.631: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 164 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":12,"skipped":117,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:44.922: INFO: Only supported for providers [vsphere] (not aws)
... skipping 58 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":100,"failed":0}
[BeforeEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:39:25.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 53 lines ...
• [SLOW TEST:21.223 seconds]
[sig-node] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":100,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
Sep 19 13:37:00.930: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7120
Sep 19 13:37:01.042: INFO: creating *v1.StatefulSet: csi-mock-volumes-7120-8334/csi-mockplugin-attacher
Sep 19 13:37:01.154: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7120"
Sep 19 13:37:01.265: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7120 to register on node ip-172-20-55-38.eu-central-1.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Sep 19 13:37:13.166: INFO: Error getting logs for pod inline-volume-cp5tj: the server rejected our request for an unknown reason (get pods inline-volume-cp5tj)
Sep 19 13:37:13.278: INFO: Deleting pod "inline-volume-cp5tj" in namespace "csi-mock-volumes-7120"
Sep 19 13:37:13.393: INFO: Wait up to 5m0s for pod "inline-volume-cp5tj" to be fully deleted
STEP: Deleting the previously created pod
Sep 19 13:39:17.615: INFO: Deleting pod "pvc-volume-tester-slqxq" in namespace "csi-mock-volumes-7120"
Sep 19 13:39:17.729: INFO: Wait up to 5m0s for pod "pvc-volume-tester-slqxq" to be fully deleted
STEP: Checking CSI driver logs
Sep 19 13:39:20.116: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: c49255cd-cd88-4ad2-aab5-aee75efb0f0a
Sep 19 13:39:20.116: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Sep 19 13:39:20.116: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Sep 19 13:39:20.116: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-slqxq
Sep 19 13:39:20.116: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-7120
Sep 19 13:39:20.116: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-157e33b1a62f0fb1d386f2a274f01bc39370291c234f69d18b6c38c5b7689792","target_path":"/var/lib/kubelet/pods/c49255cd-cd88-4ad2-aab5-aee75efb0f0a/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-slqxq
Sep 19 13:39:20.116: INFO: Deleting pod "pvc-volume-tester-slqxq" in namespace "csi-mock-volumes-7120"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-7120
STEP: Waiting for namespaces [csi-mock-volumes-7120] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":4,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:46.799: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 62 lines ...
• [SLOW TEST:8.331 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":84,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:47.128: INFO: Only supported for providers [azure] (not aws)
... skipping 100 lines ...
      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":27,"failed":0}
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:39:40.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 19 13:39:44.297: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2c287d2f-291c-4e25-b414-e036f815a99c"
Sep 19 13:39:44.297: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2c287d2f-291c-4e25-b414-e036f815a99c" in namespace "pods-9624" to be "terminated due to deadline exceeded"
Sep 19 13:39:44.406: INFO: Pod "pod-update-activedeadlineseconds-2c287d2f-291c-4e25-b414-e036f815a99c": Phase="Running", Reason="", readiness=true. Elapsed: 109.1814ms
Sep 19 13:39:46.517: INFO: Pod "pod-update-activedeadlineseconds-2c287d2f-291c-4e25-b414-e036f815a99c": Phase="Running", Reason="", readiness=true. Elapsed: 2.219930277s
Sep 19 13:39:48.628: INFO: Pod "pod-update-activedeadlineseconds-2c287d2f-291c-4e25-b414-e036f815a99c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.331267937s
Sep 19 13:39:48.628: INFO: Pod "pod-update-activedeadlineseconds-2c287d2f-291c-4e25-b414-e036f815a99c" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:48.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9624" for this suite.


• [SLOW TEST:8.386 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1483
STEP: Waiting until pod test-pod will start running in namespace statefulset-1483
STEP: Creating statefulset with conflicting port in namespace statefulset-1483
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1483
Sep 19 13:39:36.440: INFO: Observed stateful pod in namespace: statefulset-1483, name: ss-0, uid: f687278d-7d6e-4f50-b79e-84aba728625c, status phase: Pending. Waiting for statefulset controller to delete.
Sep 19 13:39:37.591: INFO: Observed stateful pod in namespace: statefulset-1483, name: ss-0, uid: f687278d-7d6e-4f50-b79e-84aba728625c, status phase: Failed. Waiting for statefulset controller to delete.
Sep 19 13:39:37.599: INFO: Observed stateful pod in namespace: statefulset-1483, name: ss-0, uid: f687278d-7d6e-4f50-b79e-84aba728625c, status phase: Failed. Waiting for statefulset controller to delete.
Sep 19 13:39:37.602: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1483
STEP: Removing pod with conflicting port in namespace statefulset-1483
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1483 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Sep 19 13:39:42.051: INFO: Deleting all statefulset in ns statefulset-1483
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":13,"skipped":128,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
... skipping 9 lines ...
Sep 19 13:39:22.246: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-9608gs7vc
STEP: creating a claim
Sep 19 13:39:22.355: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-swjc
STEP: Creating a pod to test exec-volume-test
Sep 19 13:39:22.695: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-swjc" in namespace "volume-9608" to be "Succeeded or Failed"
Sep 19 13:39:22.804: INFO: Pod "exec-volume-test-dynamicpv-swjc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.131047ms
Sep 19 13:39:24.921: INFO: Pod "exec-volume-test-dynamicpv-swjc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22644596s
Sep 19 13:39:27.032: INFO: Pod "exec-volume-test-dynamicpv-swjc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336858535s
Sep 19 13:39:29.153: INFO: Pod "exec-volume-test-dynamicpv-swjc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.457602685s
Sep 19 13:39:31.264: INFO: Pod "exec-volume-test-dynamicpv-swjc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56946429s
Sep 19 13:39:33.377: INFO: Pod "exec-volume-test-dynamicpv-swjc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.681573711s
Sep 19 13:39:35.488: INFO: Pod "exec-volume-test-dynamicpv-swjc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.792880638s
Sep 19 13:39:37.598: INFO: Pod "exec-volume-test-dynamicpv-swjc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.902772735s
STEP: Saw pod success
Sep 19 13:39:37.598: INFO: Pod "exec-volume-test-dynamicpv-swjc" satisfied condition "Succeeded or Failed"
Sep 19 13:39:37.707: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod exec-volume-test-dynamicpv-swjc container exec-container-dynamicpv-swjc: <nil>
STEP: delete the pod
Sep 19 13:39:37.932: INFO: Waiting for pod exec-volume-test-dynamicpv-swjc to disappear
Sep 19 13:39:38.041: INFO: Pod exec-volume-test-dynamicpv-swjc no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-swjc
Sep 19 13:39:38.041: INFO: Deleting pod "exec-volume-test-dynamicpv-swjc" in namespace "volume-9608"
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":14,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:39:48.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 19 13:39:49.540: INFO: Waiting up to 5m0s for pod "pod-dc32d222-4016-4bf6-98ee-7cf4a0d3d7c3" in namespace "emptydir-2518" to be "Succeeded or Failed"
Sep 19 13:39:49.659: INFO: Pod "pod-dc32d222-4016-4bf6-98ee-7cf4a0d3d7c3": Phase="Pending", Reason="", readiness=false. Elapsed: 118.916403ms
Sep 19 13:39:51.768: INFO: Pod "pod-dc32d222-4016-4bf6-98ee-7cf4a0d3d7c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228599021s
Sep 19 13:39:53.878: INFO: Pod "pod-dc32d222-4016-4bf6-98ee-7cf4a0d3d7c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33870512s
STEP: Saw pod success
Sep 19 13:39:53.878: INFO: Pod "pod-dc32d222-4016-4bf6-98ee-7cf4a0d3d7c3" satisfied condition "Succeeded or Failed"
Sep 19 13:39:53.988: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-dc32d222-4016-4bf6-98ee-7cf4a0d3d7c3 container test-container: <nil>
STEP: delete the pod
Sep 19 13:39:54.214: INFO: Waiting for pod pod-dc32d222-4016-4bf6-98ee-7cf4a0d3d7c3 to disappear
Sep 19 13:39:54.323: INFO: Pod pod-dc32d222-4016-4bf6-98ee-7cf4a0d3d7c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.665 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":31,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:39:55.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9440" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":10,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:55.738: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":83,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:57.942: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":9,"skipped":101,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:73.544 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe that the PodDisruptionBudget status is not updated for unmanaged pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:191
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods","total":-1,"completed":10,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:39:59.254: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-9765c0bd-a8d6-4ccf-98a4-08f8c690625b
STEP: Creating a pod to test consume secrets
Sep 19 13:39:58.727: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c6fe198f-a323-4082-ae08-8f81c9852970" in namespace "projected-4596" to be "Succeeded or Failed"
Sep 19 13:39:58.837: INFO: Pod "pod-projected-secrets-c6fe198f-a323-4082-ae08-8f81c9852970": Phase="Pending", Reason="", readiness=false. Elapsed: 110.05001ms
Sep 19 13:40:00.947: INFO: Pod "pod-projected-secrets-c6fe198f-a323-4082-ae08-8f81c9852970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220690574s
STEP: Saw pod success
Sep 19 13:40:00.947: INFO: Pod "pod-projected-secrets-c6fe198f-a323-4082-ae08-8f81c9852970" satisfied condition "Succeeded or Failed"
Sep 19 13:40:01.058: INFO: Trying to get logs from node ip-172-20-55-38.eu-central-1.compute.internal pod pod-projected-secrets-c6fe198f-a323-4082-ae08-8f81c9852970 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 19 13:40:01.296: INFO: Waiting for pod pod-projected-secrets-c6fe198f-a323-4082-ae08-8f81c9852970 to disappear
Sep 19 13:40:01.406: INFO: Pod pod-projected-secrets-c6fe198f-a323-4082-ae08-8f81c9852970 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:40:01.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4596" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":86,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:01.647: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 42 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":5,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:01.835: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 133 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":13,"skipped":124,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 16 lines ...
Sep 19 13:39:51.967: INFO: PersistentVolumeClaim pvc-hzv6k found but phase is Pending instead of Bound.
Sep 19 13:39:54.077: INFO: PersistentVolumeClaim pvc-hzv6k found and phase=Bound (4.330408671s)
Sep 19 13:39:54.077: INFO: Waiting up to 3m0s for PersistentVolume local-csfck to have phase Bound
Sep 19 13:39:54.188: INFO: PersistentVolume local-csfck found and phase=Bound (110.875966ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-f9d9
STEP: Creating a pod to test subpath
Sep 19 13:39:54.520: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-f9d9" in namespace "provisioning-2985" to be "Succeeded or Failed"
Sep 19 13:39:54.629: INFO: Pod "pod-subpath-test-preprovisionedpv-f9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.006439ms
Sep 19 13:39:56.739: INFO: Pod "pod-subpath-test-preprovisionedpv-f9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219343267s
Sep 19 13:39:58.849: INFO: Pod "pod-subpath-test-preprovisionedpv-f9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329449472s
Sep 19 13:40:00.959: INFO: Pod "pod-subpath-test-preprovisionedpv-f9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.438800873s
Sep 19 13:40:03.069: INFO: Pod "pod-subpath-test-preprovisionedpv-f9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548910667s
Sep 19 13:40:05.180: INFO: Pod "pod-subpath-test-preprovisionedpv-f9d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.660037734s
STEP: Saw pod success
Sep 19 13:40:05.180: INFO: Pod "pod-subpath-test-preprovisionedpv-f9d9" satisfied condition "Succeeded or Failed"
Sep 19 13:40:05.291: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-f9d9 container test-container-volume-preprovisionedpv-f9d9: <nil>
STEP: delete the pod
Sep 19 13:40:05.528: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-f9d9 to disappear
Sep 19 13:40:05.638: INFO: Pod pod-subpath-test-preprovisionedpv-f9d9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-f9d9
Sep 19 13:40:05.638: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-f9d9" in namespace "provisioning-2985"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":99,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 131 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":3,"skipped":31,"failed":1,"failures":["[sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:07.507: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 75 lines ...
• [SLOW TEST:16.573 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":14,"skipped":143,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:09.941: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 89 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445

      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":9,"skipped":100,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:09.999: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 148 lines ...
Sep 19 13:39:20.258: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Sep 19 13:39:20.380: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathxslfm] to have phase Bound
Sep 19 13:39:20.488: INFO: PersistentVolumeClaim csi-hostpathxslfm found but phase is Pending instead of Bound.
Sep 19 13:39:22.600: INFO: PersistentVolumeClaim csi-hostpathxslfm found and phase=Bound (2.220015551s)
STEP: Creating pod pod-subpath-test-dynamicpv-cmxg
STEP: Creating a pod to test atomic-volume-subpath
Sep 19 13:39:22.931: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-cmxg" in namespace "provisioning-1700" to be "Succeeded or Failed"
Sep 19 13:39:23.040: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Pending", Reason="", readiness=false. Elapsed: 108.387692ms
Sep 19 13:39:25.178: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246315062s
Sep 19 13:39:27.286: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355235672s
Sep 19 13:39:29.395: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463917779s
Sep 19 13:39:31.505: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.573610685s
Sep 19 13:39:33.617: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.686011737s
... skipping 5 lines ...
Sep 19 13:39:46.276: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Running", Reason="", readiness=true. Elapsed: 23.344627452s
Sep 19 13:39:48.386: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Running", Reason="", readiness=true. Elapsed: 25.454429693s
Sep 19 13:39:50.501: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Running", Reason="", readiness=true. Elapsed: 27.569616008s
Sep 19 13:39:52.618: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Running", Reason="", readiness=true. Elapsed: 29.686362132s
Sep 19 13:39:54.728: INFO: Pod "pod-subpath-test-dynamicpv-cmxg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.796738687s
STEP: Saw pod success
Sep 19 13:39:54.728: INFO: Pod "pod-subpath-test-dynamicpv-cmxg" satisfied condition "Succeeded or Failed"
Sep 19 13:39:54.841: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-cmxg container test-container-subpath-dynamicpv-cmxg: <nil>
STEP: delete the pod
Sep 19 13:39:55.074: INFO: Waiting for pod pod-subpath-test-dynamicpv-cmxg to disappear
Sep 19 13:39:55.189: INFO: Pod pod-subpath-test-dynamicpv-cmxg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-cmxg
Sep 19 13:39:55.189: INFO: Deleting pod "pod-subpath-test-dynamicpv-cmxg" in namespace "provisioning-1700"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:12.633: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename volume-provisioning
W0919 13:35:11.074253    4866 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 19 13:35:11.074: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Sep 19 13:35:11.525: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Sep 19 13:40:12.087: INFO: The test missed event about failed provisioning, but checked that no volume was provisioned for 5m0s
Sep 19 13:40:12.087: INFO: deleting claim "volume-provisioning-1749"/"pvc-2wkq4"
Sep 19 13:40:12.201: INFO: deleting storage class volume-provisioning-1749-invalid-awsgs4zp
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:40:12.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-1749" for this suite.


• [SLOW TEST:302.700 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:737
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:738
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:12.763: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:38:59.737: INFO: >>> kubeConfig: /root/.kube/config
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":9,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:13.669: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-52da67e2-c38e-470c-8aa0-22cc41ae30cb
STEP: Creating a pod to test consume configMaps
Sep 19 13:40:10.902: INFO: Waiting up to 5m0s for pod "pod-configmaps-16eb81e0-c349-4572-aa00-589ec8222e91" in namespace "configmap-4989" to be "Succeeded or Failed"
Sep 19 13:40:11.011: INFO: Pod "pod-configmaps-16eb81e0-c349-4572-aa00-589ec8222e91": Phase="Pending", Reason="", readiness=false. Elapsed: 109.056213ms
Sep 19 13:40:13.121: INFO: Pod "pod-configmaps-16eb81e0-c349-4572-aa00-589ec8222e91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218841694s
STEP: Saw pod success
Sep 19 13:40:13.121: INFO: Pod "pod-configmaps-16eb81e0-c349-4572-aa00-589ec8222e91" satisfied condition "Succeeded or Failed"
Sep 19 13:40:13.230: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod pod-configmaps-16eb81e0-c349-4572-aa00-589ec8222e91 container configmap-volume-test: <nil>
STEP: delete the pod
Sep 19 13:40:13.454: INFO: Waiting for pod pod-configmaps-16eb81e0-c349-4572-aa00-589ec8222e91 to disappear
Sep 19 13:40:13.563: INFO: Pod pod-configmaps-16eb81e0-c349-4572-aa00-589ec8222e91 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 94 lines ...
Sep 19 13:39:22.102: INFO: Terminating ReplicationController up-down-1 pods took: 100.159089ms
STEP: verifying service up-down-1 is not up
Sep 19 13:39:26.925: INFO: Creating new host exec pod
Sep 19 13:39:27.142: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:39:29.251: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Sep 19 13:39:31.253: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Sep 19 13:39:31.253: INFO: Running '/tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7350 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.173.105:80 && echo service-down-failed'
Sep 19 13:39:34.399: INFO: rc: 28
Sep 19 13:39:34.399: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.69.173.105:80 && echo service-down-failed" in pod services-7350/verify-service-down-host-exec-pod: error running /tmp/kubectl127988482/kubectl --server=https://api.e2e-aec27c8c61-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7350 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.69.173.105:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.69.173.105:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-7350
STEP: verifying service up-down-2 is still up
Sep 19 13:39:34.521: INFO: Creating new host exec pod
Sep 19 13:39:34.739: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
... skipping 64 lines ...
• [SLOW TEST:95.571 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1036
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":16,"skipped":99,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:15.165: INFO: Driver emptydir doesn't support GenericEphemeralVolume -- skipping
... skipping 114 lines ...
Sep 19 13:39:46.071: INFO: Unable to read jessie_udp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:46.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:46.287: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:46.396: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:46.504: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:46.613: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:47.049: INFO: Lookups using dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6929 wheezy_tcp@dns-test-service.dns-6929 wheezy_udp@dns-test-service.dns-6929.svc wheezy_tcp@dns-test-service.dns-6929.svc wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6929 jessie_tcp@dns-test-service.dns-6929 jessie_udp@dns-test-service.dns-6929.svc jessie_tcp@dns-test-service.dns-6929.svc jessie_udp@_http._tcp.dns-test-service.dns-6929.svc jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc]

Sep 19 13:39:52.163: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:52.271: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:52.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:52.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:52.603: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
... skipping 5 lines ...
Sep 19 13:39:53.695: INFO: Unable to read jessie_udp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:53.803: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:53.912: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:54.020: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:54.130: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:54.238: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:54.679: INFO: Lookups using dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6929 wheezy_tcp@dns-test-service.dns-6929 wheezy_udp@dns-test-service.dns-6929.svc wheezy_tcp@dns-test-service.dns-6929.svc wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6929 jessie_tcp@dns-test-service.dns-6929 jessie_udp@dns-test-service.dns-6929.svc jessie_tcp@dns-test-service.dns-6929.svc jessie_udp@_http._tcp.dns-test-service.dns-6929.svc jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc]

Sep 19 13:39:57.160: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:57.272: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:57.389: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:57.499: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:57.607: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
... skipping 5 lines ...
Sep 19 13:39:58.707: INFO: Unable to read jessie_udp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:58.816: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:58.926: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:59.034: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:59.143: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:59.251: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:39:59.688: INFO: Lookups using dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6929 wheezy_tcp@dns-test-service.dns-6929 wheezy_udp@dns-test-service.dns-6929.svc wheezy_tcp@dns-test-service.dns-6929.svc wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6929 jessie_tcp@dns-test-service.dns-6929 jessie_udp@dns-test-service.dns-6929.svc jessie_tcp@dns-test-service.dns-6929.svc jessie_udp@_http._tcp.dns-test-service.dns-6929.svc jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc]

Sep 19 13:40:02.159: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:02.267: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:02.380: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:02.489: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:02.602: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
... skipping 5 lines ...
Sep 19 13:40:03.710: INFO: Unable to read jessie_udp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:03.818: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:03.927: INFO: Unable to read jessie_udp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:04.039: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:04.149: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:04.263: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:04.713: INFO: Lookups using dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6929 wheezy_tcp@dns-test-service.dns-6929 wheezy_udp@dns-test-service.dns-6929.svc wheezy_tcp@dns-test-service.dns-6929.svc wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6929 jessie_tcp@dns-test-service.dns-6929 jessie_udp@dns-test-service.dns-6929.svc jessie_tcp@dns-test-service.dns-6929.svc jessie_udp@_http._tcp.dns-test-service.dns-6929.svc jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc]

Sep 19 13:40:07.158: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:07.266: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:07.391: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:07.500: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:07.611: INFO: Unable to read wheezy_udp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:07.725: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:07.833: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:07.944: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:08.493: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:08.820: INFO: Unable to read jessie_tcp@dns-test-service.dns-6929 from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:09.257: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc from pod dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f: the server could not find the requested resource (get pods dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f)
Sep 19 13:40:09.775: INFO: Lookups using dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6929 wheezy_tcp@dns-test-service.dns-6929 wheezy_udp@dns-test-service.dns-6929.svc wheezy_tcp@dns-test-service.dns-6929.svc wheezy_udp@_http._tcp.dns-test-service.dns-6929.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6929.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service.dns-6929 jessie_tcp@_http._tcp.dns-test-service.dns-6929.svc]

Sep 19 13:40:14.674: INFO: DNS probes using dns-6929/dns-test-09d8e96e-0572-4cd9-abb6-0dac8abced3f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:40.093 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":15,"skipped":117,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":120,"failed":0}
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:40:13.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename metrics-grabber
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
STEP: Destroying namespace "services-4863" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":10,"skipped":58,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:40:15.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
STEP: Destroying namespace "services-3315" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":11,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:18.220: INFO: Only supported for providers [vsphere] (not aws)
... skipping 178 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":7,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:19.609: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 90 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 19 13:40:13.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407" in namespace "projected-9489" to be "Succeeded or Failed"
Sep 19 13:40:13.446: INFO: Pod "downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407": Phase="Pending", Reason="", readiness=false. Elapsed: 110.067617ms
Sep 19 13:40:15.560: INFO: Pod "downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224163066s
Sep 19 13:40:17.670: INFO: Pod "downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333441214s
Sep 19 13:40:19.778: INFO: Pod "downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.442001205s
STEP: Saw pod success
Sep 19 13:40:19.778: INFO: Pod "downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407" satisfied condition "Succeeded or Failed"
Sep 19 13:40:19.887: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407 container client-container: <nil>
STEP: delete the pod
Sep 19 13:40:20.112: INFO: Waiting for pod downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407 to disappear
Sep 19 13:40:20.220: INFO: Pod downwardapi-volume-0a0a6536-b5da-4e2c-a5f0-5375067d2407 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.776 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:20.470: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 44 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:40:20.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9823" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":12,"skipped":67,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 99 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:997
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1042
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":4,"skipped":39,"failed":1,"failures":["[sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:22.586: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 62 lines ...
Sep 19 13:40:15.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 19 13:40:15.892: INFO: Waiting up to 5m0s for pod "downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d" in namespace "downward-api-2512" to be "Succeeded or Failed"
Sep 19 13:40:16.000: INFO: Pod "downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 107.968754ms
Sep 19 13:40:18.109: INFO: Pod "downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216778059s
Sep 19 13:40:20.221: INFO: Pod "downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328505364s
Sep 19 13:40:22.339: INFO: Pod "downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.446334359s
STEP: Saw pod success
Sep 19 13:40:22.339: INFO: Pod "downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d" satisfied condition "Succeeded or Failed"
Sep 19 13:40:22.463: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d container dapi-container: <nil>
STEP: delete the pod
Sep 19 13:40:22.701: INFO: Waiting for pod downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d to disappear
Sep 19 13:40:22.810: INFO: Pod downward-api-4d2dffa7-c06e-4654-9629-1bf78a2e7d1d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.794 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":113,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Sep 19 13:39:52.361: INFO: PersistentVolumeClaim pvc-p7qjt found but phase is Pending instead of Bound.
Sep 19 13:39:54.471: INFO: PersistentVolumeClaim pvc-p7qjt found and phase=Bound (10.660793624s)
Sep 19 13:39:54.471: INFO: Waiting up to 3m0s for PersistentVolume local-5ghhg to have phase Bound
Sep 19 13:39:54.579: INFO: PersistentVolume local-5ghhg found and phase=Bound (108.130694ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cb2z
STEP: Creating a pod to test atomic-volume-subpath
Sep 19 13:39:54.914: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cb2z" in namespace "provisioning-6990" to be "Succeeded or Failed"
Sep 19 13:39:55.026: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Pending", Reason="", readiness=false. Elapsed: 112.115931ms
Sep 19 13:39:57.136: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221693194s
Sep 19 13:39:59.245: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330802165s
Sep 19 13:40:01.355: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440650899s
Sep 19 13:40:03.478: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563684748s
Sep 19 13:40:05.588: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Running", Reason="", readiness=true. Elapsed: 10.673570089s
... skipping 3 lines ...
Sep 19 13:40:14.032: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Running", Reason="", readiness=true. Elapsed: 19.117865637s
Sep 19 13:40:16.143: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Running", Reason="", readiness=true. Elapsed: 21.228213184s
Sep 19 13:40:18.252: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Running", Reason="", readiness=true. Elapsed: 23.337672359s
Sep 19 13:40:20.366: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Running", Reason="", readiness=true. Elapsed: 25.451637244s
Sep 19 13:40:22.482: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.567945018s
STEP: Saw pod success
Sep 19 13:40:22.482: INFO: Pod "pod-subpath-test-preprovisionedpv-cb2z" satisfied condition "Succeeded or Failed"
Sep 19 13:40:22.600: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-cb2z container test-container-subpath-preprovisionedpv-cb2z: <nil>
STEP: delete the pod
Sep 19 13:40:22.832: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cb2z to disappear
Sep 19 13:40:22.943: INFO: Pod pod-subpath-test-preprovisionedpv-cb2z no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cb2z
Sep 19 13:40:22.943: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cb2z" in namespace "provisioning-6990"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":10,"skipped":119,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:24.565: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 9 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:40:24.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":11,"skipped":125,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 59 lines ...
Sep 19 13:40:19.231: INFO: Waiting for pod aws-client to disappear
Sep 19 13:40:19.341: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 19 13:40:19.341: INFO: Deleting PersistentVolumeClaim "pvc-l575b"
Sep 19 13:40:19.451: INFO: Deleting PersistentVolume "aws-gw8m9"
Sep 19 13:40:20.180: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0a7118afe3756eddc", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0a7118afe3756eddc is currently attached to i-02914935a9a348924
	status code: 400, request id: f0bd3b97-8a6c-4fb2-8507-f7fb736aab43
Sep 19 13:40:25.877: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0a7118afe3756eddc".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:40:25.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9062" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":91,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Sep 19 13:39:59.818: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-75372hkj7
STEP: creating a claim
Sep 19 13:39:59.928: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-crkb
STEP: Creating a pod to test subpath
Sep 19 13:40:00.259: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-crkb" in namespace "provisioning-7537" to be "Succeeded or Failed"
Sep 19 13:40:00.371: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Pending", Reason="", readiness=false. Elapsed: 110.96203ms
Sep 19 13:40:02.480: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220946591s
Sep 19 13:40:04.594: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334145628s
Sep 19 13:40:06.704: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443965427s
Sep 19 13:40:08.813: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553309759s
Sep 19 13:40:10.928: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668650748s
Sep 19 13:40:13.038: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.778536066s
Sep 19 13:40:15.150: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.890279491s
Sep 19 13:40:17.259: INFO: Pod "pod-subpath-test-dynamicpv-crkb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.999507835s
STEP: Saw pod success
Sep 19 13:40:17.259: INFO: Pod "pod-subpath-test-dynamicpv-crkb" satisfied condition "Succeeded or Failed"
Sep 19 13:40:17.368: INFO: Trying to get logs from node ip-172-20-62-71.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-crkb container test-container-subpath-dynamicpv-crkb: <nil>
STEP: delete the pod
Sep 19 13:40:17.596: INFO: Waiting for pod pod-subpath-test-dynamicpv-crkb to disappear
Sep 19 13:40:17.705: INFO: Pod pod-subpath-test-dynamicpv-crkb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-crkb
Sep 19 13:40:17.705: INFO: Deleting pod "pod-subpath-test-dynamicpv-crkb" in namespace "provisioning-7537"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":11,"skipped":101,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:28.954: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 21 lines ...
Sep 19 13:40:19.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Sep 19 13:40:20.364: INFO: Waiting up to 5m0s for pod "security-context-3ec788d5-7ca3-410e-900e-fc27b1507674" in namespace "security-context-7911" to be "Succeeded or Failed"
Sep 19 13:40:20.473: INFO: Pod "security-context-3ec788d5-7ca3-410e-900e-fc27b1507674": Phase="Pending", Reason="", readiness=false. Elapsed: 108.846378ms
Sep 19 13:40:22.594: INFO: Pod "security-context-3ec788d5-7ca3-410e-900e-fc27b1507674": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229496863s
Sep 19 13:40:24.703: INFO: Pod "security-context-3ec788d5-7ca3-410e-900e-fc27b1507674": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339419173s
Sep 19 13:40:26.814: INFO: Pod "security-context-3ec788d5-7ca3-410e-900e-fc27b1507674": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449734988s
Sep 19 13:40:28.923: INFO: Pod "security-context-3ec788d5-7ca3-410e-900e-fc27b1507674": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.558900544s
STEP: Saw pod success
Sep 19 13:40:28.923: INFO: Pod "security-context-3ec788d5-7ca3-410e-900e-fc27b1507674" satisfied condition "Succeeded or Failed"
Sep 19 13:40:29.031: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod security-context-3ec788d5-7ca3-410e-900e-fc27b1507674 container test-container: <nil>
STEP: delete the pod
Sep 19 13:40:29.254: INFO: Waiting for pod security-context-3ec788d5-7ca3-410e-900e-fc27b1507674 to disappear
Sep 19 13:40:29.362: INFO: Pod security-context-3ec788d5-7ca3-410e-900e-fc27b1507674 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.885 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":8,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:29.595: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Sep 19 13:40:25.606: INFO: Waiting up to 5m0s for pod "metadata-volume-7c8bd3af-a1dc-4537-a87d-bfcbc4f2ea16" in namespace "downward-api-9213" to be "Succeeded or Failed"
Sep 19 13:40:25.715: INFO: Pod "metadata-volume-7c8bd3af-a1dc-4537-a87d-bfcbc4f2ea16": Phase="Pending", Reason="", readiness=false. Elapsed: 109.36742ms
Sep 19 13:40:27.824: INFO: Pod "metadata-volume-7c8bd3af-a1dc-4537-a87d-bfcbc4f2ea16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218317729s
Sep 19 13:40:29.934: INFO: Pod "metadata-volume-7c8bd3af-a1dc-4537-a87d-bfcbc4f2ea16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.328375292s
STEP: Saw pod success
Sep 19 13:40:29.934: INFO: Pod "metadata-volume-7c8bd3af-a1dc-4537-a87d-bfcbc4f2ea16" satisfied condition "Succeeded or Failed"
Sep 19 13:40:30.043: INFO: Trying to get logs from node ip-172-20-48-58.eu-central-1.compute.internal pod metadata-volume-7c8bd3af-a1dc-4537-a87d-bfcbc4f2ea16 container client-container: <nil>
STEP: delete the pod
Sep 19 13:40:30.284: INFO: Waiting for pod metadata-volume-7c8bd3af-a1dc-4537-a87d-bfcbc4f2ea16 to disappear
Sep 19 13:40:30.393: INFO: Pod metadata-volume-7c8bd3af-a1dc-4537-a87d-bfcbc4f2ea16 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.668 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":12,"skipped":127,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:30.623: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 79 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":6,"skipped":47,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:30.692: INFO: Only supported for providers [openstack] (not aws)
... skipping 30 lines ...
Sep 19 13:39:47.380: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7765zpfqk
STEP: creating a claim
Sep 19 13:39:47.491: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-vjq4
STEP: Creating a pod to test subpath
Sep 19 13:39:47.827: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-vjq4" in namespace "provisioning-7765" to be "Succeeded or Failed"
Sep 19 13:39:47.942: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 115.043491ms
Sep 19 13:39:50.054: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226997029s
Sep 19 13:39:52.165: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338243144s
Sep 19 13:39:54.276: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449100377s
Sep 19 13:39:56.388: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561288678s
Sep 19 13:39:58.500: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.672791042s
... skipping 5 lines ...
Sep 19 13:40:11.179: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.351634233s
Sep 19 13:40:13.290: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 25.462929215s
Sep 19 13:40:15.402: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 27.574467853s
Sep 19 13:40:17.514: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Pending", Reason="", readiness=false. Elapsed: 29.686373478s
Sep 19 13:40:19.626: INFO: Pod "pod-subpath-test-dynamicpv-vjq4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.798455572s
STEP: Saw pod success
Sep 19 13:40:19.626: INFO: Pod "pod-subpath-test-dynamicpv-vjq4" satisfied condition "Succeeded or Failed"
Sep 19 13:40:19.736: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-vjq4 container test-container-subpath-dynamicpv-vjq4: <nil>
STEP: delete the pod
Sep 19 13:40:19.966: INFO: Waiting for pod pod-subpath-test-dynamicpv-vjq4 to disappear
Sep 19 13:40:20.080: INFO: Pod pod-subpath-test-dynamicpv-vjq4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-vjq4
Sep 19 13:40:20.080: INFO: Deleting pod "pod-subpath-test-dynamicpv-vjq4" in namespace "provisioning-7765"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":24,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:31.375: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 19 13:40:31.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8423" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":13,"skipped":134,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:32.118: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":7,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 19 13:40:13.832: INFO: >>> kubeConfig: /root/.kube/config
... skipping 11 lines ...
Sep 19 13:40:22.601: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-4hv4k] to have phase Bound
Sep 19 13:40:22.712: INFO: PersistentVolumeClaim pvc-4hv4k found and phase=Bound (111.247095ms)
Sep 19 13:40:22.712: INFO: Waiting up to 3m0s for PersistentVolume local-crzmr to have phase Bound
Sep 19 13:40:22.822: INFO: PersistentVolume local-crzmr found and phase=Bound (110.144982ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-dqk4
STEP: Creating a pod to test exec-volume-test
Sep 19 13:40:23.171: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-dqk4" in namespace "volume-6922" to be "Succeeded or Failed"
Sep 19 13:40:23.292: INFO: Pod "exec-volume-test-preprovisionedpv-dqk4": Phase="Pending", Reason="", readiness=false. Elapsed: 120.962313ms
Sep 19 13:40:25.403: INFO: Pod "exec-volume-test-preprovisionedpv-dqk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232533652s
Sep 19 13:40:27.516: INFO: Pod "exec-volume-test-preprovisionedpv-dqk4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344611102s
Sep 19 13:40:29.651: INFO: Pod "exec-volume-test-preprovisionedpv-dqk4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.480373884s
Sep 19 13:40:31.763: INFO: Pod "exec-volume-test-preprovisionedpv-dqk4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.592123973s
STEP: Saw pod success
Sep 19 13:40:31.763: INFO: Pod "exec-volume-test-preprovisionedpv-dqk4" satisfied condition "Succeeded or Failed"
Sep 19 13:40:31.886: INFO: Trying to get logs from node ip-172-20-50-204.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-dqk4 container exec-container-preprovisionedpv-dqk4: <nil>
STEP: delete the pod
Sep 19 13:40:32.115: INFO: Waiting for pod exec-volume-test-preprovisionedpv-dqk4 to disappear
Sep 19 13:40:32.226: INFO: Pod exec-volume-test-preprovisionedpv-dqk4 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-dqk4
Sep 19 13:40:32.226: INFO: Deleting pod "exec-volume-test-preprovisionedpv-dqk4" in namespace "volume-6922"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 19 13:40:33.665: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 39553 lines ...
crd-webhook-5924/sample-crd-conversion-webhook-deployment-b49d8b4cf-jngff" objectUID=18417f7c-7293-47d7-924a-85772b91a9c2 kind="CiliumEndpoint" virtual=false
I0919 13:41:15.532500       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-6925/test-rolling-update-with-lb-8c8cdc96d" need=2 deleting=1
I0919 13:41:15.532537       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-6925/test-rolling-update-with-lb-8c8cdc96d" relatedReplicaSets=[test-rolling-update-with-lb-9c68d7c8b test-rolling-update-with-lb-8c8cdc96d]
I0919 13:41:15.532825       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-8c8cdc96d" pod="deployment-6925/test-rolling-update-with-lb-8c8cdc96d-x8cfd"
I0919 13:41:15.540480       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-8c8cdc96d to 2"
I0919 13:41:15.562042       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-6925/test-rolling-update-with-lb-9c68d7c8b" need=2 creating=1
I0919 13:41:15.568961       1 garbagecollector.go:475] "Processing object" object="deployment-6925/test-rolling-update-with-lb-8c8cdc96d-x8cfd" objectUID=556b73d3-d430-463b-9326-dbf9eb475e27 kind="CiliumEndpoint" virtual=false
I0919 13:41:15.570057       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-9c68d7c8b to 2"
I0919 13:41:15.575927       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb-8c8cdc96d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-8c8cdc96d-x8cfd"
I0919 13:41:15.576539       1 garbagecollector.go:584] "Deleting object" object="deployment-6925/test-rolling-update-with-lb-8c8cdc96d-x8cfd" objectUID=556b73d3-d430-463b-9326-dbf9eb475e27 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:41:15.577317       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb-9c68d7c8b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-9c68d7c8b-55ggk"
I0919 13:41:15.633139       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6925/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:41:16.064337       1 event.go:294] "Event occurred" object="csi-mock-volumes-9888/pvc-snr8q" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0919 13:41:16.193636       1 event.go:294] "Event occurred" object="csi-mock-volumes-9888/pvc-snr8q" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-9888\" or manually created by system administrator"
I0919 13:41:16.193667       1 event.go:294] "Event occurred" object="csi-mock-volumes-9888/pvc-snr8q" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-9888\" or manually created by system administrator"
I0919 13:41:16.217069       1 pv_controller.go:879] volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" entered phase "Bound"
I0919 13:41:16.217110       1 pv_controller.go:982] volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" bound to claim "csi-mock-volumes-9888/pvc-snr8q"
I0919 13:41:16.228880       1 pv_controller.go:823] claim "csi-mock-volumes-9888/pvc-snr8q" entered phase "Bound"
I0919 13:41:16.245383       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-5970/inline-volume-tester-v8kcz" PVC="ephemeral-5970/inline-volume-tester-v8kcz-my-volume-0"
I0919 13:41:16.245414       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-5970/inline-volume-tester-v8kcz-my-volume-0"
I0919 13:41:16.246176       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-5970/inline-volume-tester-v8kcz" PVC="ephemeral-5970/inline-volume-tester-v8kcz-my-volume-1"
I0919 13:41:16.246198       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-5970/inline-volume-tester-v8kcz-my-volume-1"
I0919 13:41:16.345521       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-abe073ba-40d6-485f-b005-987f6b3cb56d" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0194f230f1a147651") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:41:16.350403       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-abe073ba-40d6-485f-b005-987f6b3cb56d" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0194f230f1a147651") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:41:16.359095       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-9f18d596-d640-41d5-85bb-f3c61907d31c" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f775bc6d7ee97bc9") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:41:16.359465       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0" (UniqueName: "kubernetes.io/csi/csi-hostpath-volumemode-2557^4238fe92-194f-11ec-aeb8-36148c921f80") from node "ip-172-20-48-58.eu-central-1.compute.internal"
I0919 13:41:16.362815       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-9f18d596-d640-41d5-85bb-f3c61907d31c" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0f775bc6d7ee97bc9") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:41:16.604479       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-7e218e85-468a-4a48-beb7-84326edf5bc3" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0eb8ce8f8f7a80df1") from node "ip-172-20-55-38.eu-central-1.compute.internal"
I0919 13:41:16.604654       1 event.go:294] "Event occurred" object="statefulset-6517/ss-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-7e218e85-468a-4a48-beb7-84326edf5bc3\" "
I0919 13:41:16.904924       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0" (UniqueName: "kubernetes.io/csi/csi-hostpath-volumemode-2557^4238fe92-194f-11ec-aeb8-36148c921f80") from node "ip-172-20-48-58.eu-central-1.compute.internal"
I0919 13:41:16.906155       1 event.go:294] "Event occurred" object="volumemode-2557/pod-f5825294-b37e-4e0d-8a7f-a6789fb46fb0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0\" "
E0919 13:41:16.981810       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-3691/default: secrets "default-token-9xsfb" is forbidden: unable to create new content in namespace persistent-local-volumes-test-3691 because it is being terminated
I0919 13:41:17.050715       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-5970/inline-volume-tester-v8kcz-my-volume-0"
I0919 13:41:17.056307       1 garbagecollector.go:475] "Processing object" object="ephemeral-5970/inline-volume-tester-v8kcz" objectUID=b51f6b79-1115-4e1d-b6e2-9c09ab5e8042 kind="Pod" virtual=false
I0919 13:41:17.062246       1 pv_controller.go:640] volume "pvc-abe073ba-40d6-485f-b005-987f6b3cb56d" is released and reclaim policy "Delete" will be executed
I0919 13:41:17.063450       1 garbagecollector.go:599] adding [v1/PersistentVolumeClaim, namespace: ephemeral-5970, name: inline-volume-tester-v8kcz-my-volume-1, uid: 9f18d596-d640-41d5-85bb-f3c61907d31c] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-5970, name: inline-volume-tester-v8kcz, uid: b51f6b79-1115-4e1d-b6e2-9c09ab5e8042] is deletingDependents
I0919 13:41:17.063789       1 garbagecollector.go:475] "Processing object" object="ephemeral-5970/inline-volume-tester-v8kcz-my-volume-1" objectUID=9f18d596-d640-41d5-85bb-f3c61907d31c kind="PersistentVolumeClaim" virtual=false
I0919 13:41:17.069868       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-5970/inline-volume-tester-v8kcz-my-volume-1"
I0919 13:41:17.070224       1 pv_controller.go:879] volume "pvc-abe073ba-40d6-485f-b005-987f6b3cb56d" entered phase "Released"
I0919 13:41:17.074413       1 pv_controller.go:1340] isVolumeReleased[pvc-abe073ba-40d6-485f-b005-987f6b3cb56d]: volume is released
I0919 13:41:17.077603       1 event.go:294] "Event occurred" object="resourcequota-5202/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0919 13:41:17.083805       1 garbagecollector.go:475] "Processing object" object="ephemeral-5970/inline-volume-tester-v8kcz" objectUID=b51f6b79-1115-4e1d-b6e2-9c09ab5e8042 kind="Pod" virtual=false
I0919 13:41:17.088061       1 pv_controller.go:640] volume "pvc-9f18d596-d640-41d5-85bb-f3c61907d31c" is released and reclaim policy "Delete" will be executed
I0919 13:41:17.088497       1 pvc_protection_controller.go:291] "PVC is unused" PVC="resourcequota-5202/test-claim"
I0919 13:41:17.089737       1 garbagecollector.go:594] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-5970, name: inline-volume-tester-v8kcz, uid: b51f6b79-1115-4e1d-b6e2-9c09ab5e8042]
I0919 13:41:17.095728       1 pv_controller.go:879] volume "pvc-9f18d596-d640-41d5-85bb-f3c61907d31c" entered phase "Released"
I0919 13:41:17.106768       1 pv_controller.go:1340] isVolumeReleased[pvc-9f18d596-d640-41d5-85bb-f3c61907d31c]: volume is released
I0919 13:41:17.210343       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9888^4") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:41:17.525394       1 namespace_controller.go:185] Namespace has been deleted volume-expand-30
I0919 13:41:17.761610       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9888^4") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:41:17.761928       1 event.go:294] "Event occurred" object="csi-mock-volumes-9888/pvc-volume-tester-v6tvd" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2\" "
E0919 13:41:17.856426       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-5973/default: secrets "default-token-27q48" is forbidden: unable to create new content in namespace ephemeral-5973 because it is being terminated
I0919 13:41:17.945138       1 namespace_controller.go:185] Namespace has been deleted cronjob-2529
I0919 13:41:18.223995       1 namespace_controller.go:185] Namespace has been deleted endpointslice-9943
I0919 13:41:18.479052       1 pv_controller.go:879] volume "pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c" entered phase "Bound"
I0919 13:41:18.479195       1 pv_controller.go:982] volume "pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c" bound to claim "provisioning-293/csi-hostpath6gj8x"
I0919 13:41:18.485935       1 pv_controller.go:823] claim "provisioning-293/csi-hostpath6gj8x" entered phase "Bound"
I0919 13:41:18.733165       1 namespace_controller.go:185] Namespace has been deleted volumemode-9901
I0919 13:41:18.856111       1 garbagecollector.go:475] "Processing object" object="volume-expand-30-2872/csi-hostpathplugin-5f9f8f5474" objectUID=35196367-f28f-493c-821d-c8c63afa6a38 kind="ControllerRevision" virtual=false
I0919 13:41:18.856515       1 stateful_set.go:440] StatefulSet has been deleted volume-expand-30-2872/csi-hostpathplugin
I0919 13:41:18.856681       1 garbagecollector.go:475] "Processing object" object="volume-expand-30-2872/csi-hostpathplugin-0" objectUID=a7d0b5d2-cfe6-46ac-8081-df8d9623b88b kind="Pod" virtual=false
I0919 13:41:18.859608       1 garbagecollector.go:584] "Deleting object" object="volume-expand-30-2872/csi-hostpathplugin-5f9f8f5474" objectUID=35196367-f28f-493c-821d-c8c63afa6a38 kind="ControllerRevision" propagationPolicy=Background
I0919 13:41:18.860064       1 garbagecollector.go:584] "Deleting object" object="volume-expand-30-2872/csi-hostpathplugin-0" objectUID=a7d0b5d2-cfe6-46ac-8081-df8d9623b88b kind="Pod" propagationPolicy=Background
I0919 13:41:19.074637       1 event.go:294] "Event occurred" object="volume-expand-9310/aws8w88s" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0919 13:41:19.082882       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-9310/aws8w88s"
I0919 13:41:19.545852       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-293^448ba737-194f-11ec-a3a7-1ee7b3b50a4f") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:41:19.705913       1 namespace_controller.go:185] Namespace has been deleted volumemode-638
I0919 13:41:19.798239       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-3917/test-new-deployment-5c557bc5bf" need=1 creating=1
I0919 13:41:19.799102       1 event.go:294] "Event occurred" object="deployment-3917/test-new-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-new-deployment-5c557bc5bf to 1"
I0919 13:41:19.806567       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-3917/test-new-deployment" err="Operation cannot be fulfilled on deployments.apps \"test-new-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:41:19.810441       1 event.go:294] "Event occurred" object="deployment-3917/test-new-deployment-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-new-deployment-5c557bc5bf-hlpc8"
E0919 13:41:19.950019       1 tokens_controller.go:262] error synchronizing serviceaccount crd-webhook-5924/default: secrets "default-token-kbr96" is forbidden: unable to create new content in namespace crd-webhook-5924 because it is being terminated
I0919 13:41:20.089851       1 event.go:294] "Event occurred" object="provisioning-293/pod-subpath-test-dynamicpv-ddnj" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c\" "
I0919 13:41:20.089887       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-293^448ba737-194f-11ec-a3a7-1ee7b3b50a4f") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:41:22.111144       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3691
I0919 13:41:22.542589       1 pv_controller.go:930] claim "volume-964/pvc-rb8nx" bound to volume "local-nzc9j"
I0919 13:41:22.547145       1 pv_controller.go:1340] isVolumeReleased[pvc-9f18d596-d640-41d5-85bb-f3c61907d31c]: volume is released
I0919 13:41:22.547993       1 pv_controller.go:1340] isVolumeReleased[pvc-abe073ba-40d6-485f-b005-987f6b3cb56d]: volume is released
I0919 13:41:22.554096       1 pv_controller.go:879] volume "local-nzc9j" entered phase "Bound"
I0919 13:41:22.554282       1 pv_controller.go:982] volume "local-nzc9j" bound to claim "volume-964/pvc-rb8nx"
I0919 13:41:22.562481       1 pv_controller.go:823] claim "volume-964/pvc-rb8nx" entered phase "Bound"
I0919 13:41:23.081091       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5973
I0919 13:41:23.213644       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "ebs.csi.aws.com-vol-0beeaf33061e36728" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0beeaf33061e36728") on node "ip-172-20-48-58.eu-central-1.compute.internal"
I0919 13:41:23.317589       1 replica_set.go:563] "Too few replicas" replicaSet="disruption-2657/rs" need=10 creating=1
I0919 13:41:23.330072       1 event.go:294] "Event occurred" object="disruption-2657/rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rs-687kr"
I0919 13:41:23.471199       1
garbagecollector.go:475] \"Processing object\" object=\"ephemeral-5973-4693/csi-hostpathplugin-6d77d5c7fc\" objectUID=3d7c5a1f-2e8f-4aa5-89f6-1d16217fc596 kind=\"ControllerRevision\" virtual=false\nI0919 13:41:23.471635       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-5973-4693/csi-hostpathplugin\nI0919 13:41:23.471787       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-5973-4693/csi-hostpathplugin-0\" objectUID=784fbfd2-3914-4530-9927-c4a53b856abc kind=\"Pod\" virtual=false\nI0919 13:41:23.481493       1 garbagecollector.go:584] \"Deleting object\" object=\"ephemeral-5973-4693/csi-hostpathplugin-6d77d5c7fc\" objectUID=3d7c5a1f-2e8f-4aa5-89f6-1d16217fc596 kind=\"ControllerRevision\" propagationPolicy=Background\nI0919 13:41:23.484953       1 garbagecollector.go:584] \"Deleting object\" object=\"ephemeral-5973-4693/csi-hostpathplugin-0\" objectUID=784fbfd2-3914-4530-9927-c4a53b856abc kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:23.915134       1 namespace_controller.go:185] Namespace has been deleted kubectl-8062\nE0919 13:41:24.236924       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-30-2872/default: secrets \"default-token-g6579\" is forbidden: unable to create new content in namespace volume-expand-30-2872 because it is being terminated\nI0919 13:41:24.451210       1 namespace_controller.go:185] Namespace has been deleted secrets-1622\nI0919 13:41:24.673172       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-5202/test-quota\nE0919 13:41:24.801222       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-9310/default: secrets \"default-token-hspdg\" is forbidden: unable to create new content in namespace volume-expand-9310 because it is being terminated\nI0919 13:41:25.060006       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-5924\nI0919 13:41:25.137387       1 pv_controller_base.go:521] deletion 
of claim \"volume-9791/pvc-9vngm\" was already processed\nI0919 13:41:25.808904       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-5l8d7\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0314bd4ed5e46a716\") on node \"ip-172-20-48-58.eu-central-1.compute.internal\" \nE0919 13:41:26.022343       1 tokens_controller.go:262] error synchronizing serviceaccount volume-4093/default: secrets \"default-token-cvtn5\" is forbidden: unable to create new content in namespace volume-4093 because it is being terminated\nI0919 13:41:26.371894       1 namespace_controller.go:185] Namespace has been deleted secrets-5603\nE0919 13:41:26.724799       1 tokens_controller.go:262] error synchronizing serviceaccount discovery-4053/default: secrets \"default-token-44vnd\" is forbidden: unable to create new content in namespace discovery-4053 because it is being terminated\nI0919 13:41:26.953579       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-3917/test-new-deployment-5c557bc5bf\" objectUID=23c0f07c-27d7-49ba-8213-c88b0458bdaa kind=\"ReplicaSet\" virtual=false\nI0919 13:41:26.953818       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-801/pvc-zwrj2\"\nI0919 13:41:26.953880       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-3917/test-new-deployment\"\nI0919 13:41:26.956681       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-3917/test-new-deployment-5c557bc5bf\" objectUID=23c0f07c-27d7-49ba-8213-c88b0458bdaa kind=\"ReplicaSet\" propagationPolicy=Background\nI0919 13:41:26.961074       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-3917/test-new-deployment-5c557bc5bf-hlpc8\" objectUID=f30c3183-6cf0-4903-a436-6a2dcd6182c6 kind=\"Pod\" virtual=false\nI0919 13:41:26.962845       1 pv_controller.go:640] volume \"local-4xjgw\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:41:26.963245       1 
garbagecollector.go:584] \"Deleting object\" object=\"deployment-3917/test-new-deployment-5c557bc5bf-hlpc8\" objectUID=f30c3183-6cf0-4903-a436-6a2dcd6182c6 kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:26.966281       1 pv_controller.go:879] volume \"local-4xjgw\" entered phase \"Released\"\nI0919 13:41:26.972171       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-3917/test-new-deployment-5c557bc5bf-hlpc8\" objectUID=ff34895b-ca1b-4665-9c6b-a3996c5e7ca0 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:41:26.976080       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-3917/test-new-deployment-5c557bc5bf-hlpc8\" objectUID=ff34895b-ca1b-4665-9c6b-a3996c5e7ca0 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:41:27.066743       1 pv_controller_base.go:521] deletion of claim \"volumemode-801/pvc-zwrj2\" was already processed\nI0919 13:41:28.236798       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-129\nI0919 13:41:28.921781       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-qr8tw\" objectUID=e879f21c-4db6-4ff9-9d9f-11d5855f07aa kind=\"Pod\" virtual=false\nI0919 13:41:28.922400       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-kg7mj\" objectUID=36628c1a-df22-4b8a-b2f1-26b2f44cb14e kind=\"Pod\" virtual=false\nI0919 13:41:28.923517       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-wm9m4\" objectUID=c3237dba-6a88-42af-b8c7-419db3a24989 kind=\"Pod\" virtual=false\nI0919 13:41:28.924090       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-8gbx7\" objectUID=6121f11a-ab7c-4f2c-9214-a984f649ef74 kind=\"Pod\" virtual=false\nI0919 13:41:28.925177       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-qr8tw\" objectUID=e879f21c-4db6-4ff9-9d9f-11d5855f07aa kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:28.926407       1 
garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-jdv55\" objectUID=b0540da5-d35f-49bd-ab0f-290b5fe6535d kind=\"Pod\" virtual=false\nI0919 13:41:28.926681       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-4fwpx\" objectUID=897f7cd0-fb45-4884-8030-ed9bcd6e9e36 kind=\"Pod\" virtual=false\nI0919 13:41:28.927179       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-jp92w\" objectUID=f3e4f982-9c1f-4d11-9f01-902b1a1a904f kind=\"Pod\" virtual=false\nI0919 13:41:28.928616       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-8gbx7\" objectUID=6121f11a-ab7c-4f2c-9214-a984f649ef74 kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:28.931037       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-vff9b\" objectUID=adec4073-bcff-43c1-ad72-107be423b742 kind=\"Pod\" virtual=false\nI0919 13:41:28.931284       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-687kr\" objectUID=28ea5e66-ce7b-4651-9777-2d342ae932e0 kind=\"Pod\" virtual=false\nI0919 13:41:28.932104       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-kg7mj\" objectUID=36628c1a-df22-4b8a-b2f1-26b2f44cb14e kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:28.932529       1 garbagecollector.go:475] \"Processing object\" object=\"disruption-2657/rs-dlzhb\" objectUID=e732b8c1-6f64-4d41-9404-753696148265 kind=\"Pod\" virtual=false\nI0919 13:41:28.934405       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-wm9m4\" objectUID=c3237dba-6a88-42af-b8c7-419db3a24989 kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:28.934880       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-jdv55\" objectUID=b0540da5-d35f-49bd-ab0f-290b5fe6535d kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:28.935942       1 garbagecollector.go:584] \"Deleting object\" 
object=\"disruption-2657/rs-4fwpx\" objectUID=897f7cd0-fb45-4884-8030-ed9bcd6e9e36 kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:28.941840       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-687kr\" objectUID=28ea5e66-ce7b-4651-9777-2d342ae932e0 kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:28.943828       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-jp92w\" objectUID=f3e4f982-9c1f-4d11-9f01-902b1a1a904f kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:28.944088       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-vff9b\" objectUID=adec4073-bcff-43c1-ad72-107be423b742 kind=\"Pod\" propagationPolicy=Background\nE0919 13:41:28.948079       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:41:28.953467       1 garbagecollector.go:584] \"Deleting object\" object=\"disruption-2657/rs-dlzhb\" objectUID=e732b8c1-6f64-4d41-9404-753696148265 kind=\"Pod\" propagationPolicy=Background\nE0919 13:41:29.026456       1 tokens_controller.go:262] error synchronizing serviceaccount downward-api-1496/default: serviceaccounts \"default\" not found\nI0919 13:41:29.192069       1 event.go:294] \"Event occurred\" object=\"volume-expand-1257/aws8ttsj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0919 13:41:29.264952       1 pv_controller.go:1340] isVolumeReleased[pvc-9f18d596-d640-41d5-85bb-f3c61907d31c]: volume is released\nI0919 13:41:29.265032       1 pv_controller.go:1340] isVolumeReleased[pvc-abe073ba-40d6-485f-b005-987f6b3cb56d]: volume is released\nI0919 13:41:29.393678       1 pv_controller_base.go:521] deletion of claim \"ephemeral-5970/inline-volume-tester-v8kcz-my-volume-0\" was 
already processed\nI0919 13:41:29.403620       1 pv_controller_base.go:521] deletion of claim \"ephemeral-5970/inline-volume-tester-v8kcz-my-volume-1\" was already processed\nI0919 13:41:29.429064       1 event.go:294] \"Event occurred\" object=\"volume-expand-1257/aws8ttsj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0919 13:41:29.768672       1 namespace_controller.go:185] Namespace has been deleted ingress-6615\nI0919 13:41:29.863504       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-9f18d596-d640-41d5-85bb-f3c61907d31c\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0f775bc6d7ee97bc9\") on node \"ip-172-20-50-204.eu-central-1.compute.internal\" \nI0919 13:41:29.938064       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-abe073ba-40d6-485f-b005-987f6b3cb56d\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0194f230f1a147651\") on node \"ip-172-20-50-204.eu-central-1.compute.internal\" \nI0919 13:41:29.967477       1 namespace_controller.go:185] Namespace has been deleted volume-expand-30-2872\nI0919 13:41:30.017389       1 namespace_controller.go:185] Namespace has been deleted volume-expand-9310\nI0919 13:41:30.017389       1 namespace_controller.go:185] Namespace has been deleted resourcequota-5202\nI0919 13:41:30.132253       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4571/pod-ddfac904-82b5-4026-bdf5-7fce4115fb7c\" PVC=\"persistent-local-volumes-test-4571/pvc-w94sl\"\nI0919 13:41:30.132281       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4571/pvc-w94sl\"\nI0919 13:41:30.166287       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume 
\"pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-2557^4238fe92-194f-11ec-aeb8-36148c921f80\") on node \"ip-172-20-48-58.eu-central-1.compute.internal\" \nI0919 13:41:30.197690       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-2557^4238fe92-194f-11ec-aeb8-36148c921f80\") on node \"ip-172-20-48-58.eu-central-1.compute.internal\" \nI0919 13:41:30.458486       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d\" need=1 deleting=1\nI0919 13:41:30.458522       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d\" relatedReplicaSets=[test-rolling-update-with-lb-8c8cdc96d test-rolling-update-with-lb-9c68d7c8b]\nI0919 13:41:30.459266       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-8c8cdc96d to 1\"\nI0919 13:41:30.459421       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-8c8cdc96d\" pod=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d-c7xlv\"\nI0919 13:41:30.470682       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d-c7xlv\" objectUID=509dfd2a-1583-4b36-b005-be34c4aa3211 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:41:30.473536       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-8c8cdc96d-c7xlv\"\nI0919 13:41:30.481714       1 replica_set.go:563] \"Too few replicas\" 
replicaSet=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b\" need=3 creating=1\nI0919 13:41:30.486270       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-9c68d7c8b to 3\"\nI0919 13:41:30.495249       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d-c7xlv\" objectUID=509dfd2a-1583-4b36-b005-be34c4aa3211 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:41:30.496063       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-9c68d7c8b-wwl7f\"\nI0919 13:41:30.496276       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6925/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0919 13:41:30.601721       1 tokens_controller.go:262] error synchronizing serviceaccount tables-1447/default: secrets \"default-token-wdtwk\" is forbidden: unable to create new content in namespace tables-1447 because it is being terminated\nI0919 13:41:30.719184       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volumemode-2557^4238fe92-194f-11ec-aeb8-36148c921f80\") on node \"ip-172-20-48-58.eu-central-1.compute.internal\" \nI0919 13:41:30.831767       1 event.go:294] \"Event occurred\" object=\"statefulset-7255/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 
in StatefulSet ss2 successful\"\nI0919 13:41:31.062146       1 namespace_controller.go:185] Namespace has been deleted volume-4093\nE0919 13:41:31.220800       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-4527/default: secrets \"default-token-gpmkg\" is forbidden: unable to create new content in namespace kubectl-4527 because it is being terminated\nI0919 13:41:31.776984       1 namespace_controller.go:185] Namespace has been deleted discovery-4053\nI0919 13:41:31.849057       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-6797, name: inline-volume-tester-sskrf, uid: 46294d18-8882-426c-ae4e-2508c66c7e6b] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0919 13:41:31.849661       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-6797/inline-volume-tester-sskrf\" objectUID=05a3d476-2212-45ee-8591-c7a2dde80926 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:41:31.850006       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-6797/inline-volume-tester-sskrf\" objectUID=46294d18-8882-426c-ae4e-2508c66c7e6b kind=\"Pod\" virtual=false\nI0919 13:41:31.850337       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-6797/inline-volume-tester-sskrf-my-volume-0\" objectUID=eaf459f8-d69a-4283-b52c-c06a7229131f kind=\"PersistentVolumeClaim\" virtual=false\nI0919 13:41:31.856983       1 garbagecollector.go:599] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-6797, name: inline-volume-tester-sskrf, uid: 05a3d476-2212-45ee-8591-c7a2dde80926] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-6797, name: inline-volume-tester-sskrf, uid: 46294d18-8882-426c-ae4e-2508c66c7e6b] is deletingDependents\nI0919 13:41:31.857008       1 garbagecollector.go:599] adding [v1/PersistentVolumeClaim, namespace: ephemeral-6797, name: inline-volume-tester-sskrf-my-volume-0, uid: eaf459f8-d69a-4283-b52c-c06a7229131f] to attemptToDelete, because its owner 
[v1/Pod, namespace: ephemeral-6797, name: inline-volume-tester-sskrf, uid: 46294d18-8882-426c-ae4e-2508c66c7e6b] is deletingDependents\nI0919 13:41:31.858637       1 garbagecollector.go:584] \"Deleting object\" object=\"ephemeral-6797/inline-volume-tester-sskrf-my-volume-0\" objectUID=eaf459f8-d69a-4283-b52c-c06a7229131f kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0919 13:41:31.859926       1 garbagecollector.go:584] \"Deleting object\" object=\"ephemeral-6797/inline-volume-tester-sskrf\" objectUID=05a3d476-2212-45ee-8591-c7a2dde80926 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:41:31.863562       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-6797/inline-volume-tester-sskrf\" PVC=\"ephemeral-6797/inline-volume-tester-sskrf-my-volume-0\"\nI0919 13:41:31.863598       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-6797/inline-volume-tester-sskrf-my-volume-0\"\nI0919 13:41:31.863652       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-6797/inline-volume-tester-sskrf-my-volume-0\" objectUID=eaf459f8-d69a-4283-b52c-c06a7229131f kind=\"PersistentVolumeClaim\" virtual=false\nI0919 13:41:31.864943       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-6797/inline-volume-tester-sskrf\" objectUID=46294d18-8882-426c-ae4e-2508c66c7e6b kind=\"Pod\" virtual=false\nI0919 13:41:31.866830       1 garbagecollector.go:599] adding [v1/PersistentVolumeClaim, namespace: ephemeral-6797, name: inline-volume-tester-sskrf-my-volume-0, uid: eaf459f8-d69a-4283-b52c-c06a7229131f] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-6797, name: inline-volume-tester-sskrf, uid: 46294d18-8882-426c-ae4e-2508c66c7e6b] is deletingDependents\nI0919 13:41:31.866866       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-6797/inline-volume-tester-sskrf-my-volume-0\" objectUID=eaf459f8-d69a-4283-b52c-c06a7229131f 
kind=\"PersistentVolumeClaim\" virtual=false\nI0919 13:41:31.866897       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-6797/inline-volume-tester-sskrf\" objectUID=05a3d476-2212-45ee-8591-c7a2dde80926 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:41:31.934053       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-2557/csi-hostpath9sp59\"\nI0919 13:41:31.940481       1 pv_controller.go:640] volume \"pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0\" is released and reclaim policy \"Delete\" will be executed\nI0919 13:41:31.945051       1 pv_controller.go:879] volume \"pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0\" entered phase \"Released\"\nI0919 13:41:31.947382       1 pv_controller.go:1340] isVolumeReleased[pvc-1e0dea7f-dd6c-4eb8-a854-67469d78a3b0]: volume is released\nI0919 13:41:31.998965       1 pv_controller_base.go:521] deletion of claim \"volumemode-2557/csi-hostpath9sp59\" was already processed\nE0919 13:41:32.093045       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-4236/pvc-cn4qk: storageclass.storage.k8s.io \"volume-4236\" not found\nI0919 13:41:32.093402       1 event.go:294] \"Event occurred\" object=\"volume-4236/pvc-cn4qk\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-4236\\\" not found\"\nI0919 13:41:32.205146       1 pv_controller.go:879] volume \"local-2fx9l\" entered phase \"Available\"\nI0919 13:41:32.571998       1 garbagecollector.go:475] \"Processing object\" object=\"statefulset-6517/ss\" objectUID=f117c7c2-4205-4efe-8cb0-f118bc8a0f40 kind=\"StatefulSet\" virtual=false\nI0919 13:41:32.577015       1 garbagecollector.go:514] object [apps/v1/StatefulSet, namespace: statefulset-6517, name: ss, uid: f117c7c2-4205-4efe-8cb0-f118bc8a0f40]'s doesn't have an owner, continue on next item\nE0919 13:41:32.577520       1 reflector.go:138] 
k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:41:32.709079       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" need=40 creating=40\nI0919 13:41:32.726941       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-7lcvs\"\nI0919 13:41:32.739424       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-cmnb4\"\nI0919 13:41:32.753698       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-dm56r\"\nI0919 13:41:32.770596       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-sgxpl\"\nI0919 13:41:32.780538       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-vxrdq\"\nI0919 13:41:32.780562       1 event.go:294] \"Event occurred\" 
object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-wdz6f\"\nI0919 13:41:32.780573       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-ll4ch\"\nI0919 13:41:32.810080       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-t8v7m\"\nI0919 13:41:32.812240       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-8rc67\"\nI0919 13:41:32.813263       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-zzbcp\"\nI0919 13:41:32.822086       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-9nkd9\"\nI0919 13:41:32.822502       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" 
message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-dzrmm\"\nI0919 13:41:32.822656       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-l9wlh\"\nI0919 13:41:32.822796       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-ldsf9\"\nI0919 13:41:32.823213       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-tlv5p\"\nI0919 13:41:32.852465       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-zvs22\"\nI0919 13:41:32.852678       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-j94nk\"\nI0919 13:41:32.857934       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-5qhr4\"\nI0919 13:41:32.858379       1 event.go:294] \"Event occurred\" 
object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-5ms9b\"\nI0919 13:41:32.858498       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-9bv4v\"\nI0919 13:41:32.858589       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-dkj5g\"\nI0919 13:41:32.858671       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-d6wpd\"\nI0919 13:41:32.858749       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-wf5dr\"\nI0919 13:41:32.858844       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-87c52\"\nI0919 13:41:32.858921       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" 
message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-5svqb\"\nI0919 13:41:32.858985       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-2g7qj\"\nI0919 13:41:32.871048       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-1755/inline-volume-tester-5t8lc\" PVC=\"ephemeral-1755/inline-volume-tester-5t8lc-my-volume-0\"\nI0919 13:41:32.871178       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-1755/inline-volume-tester-5t8lc-my-volume-0\"\nI0919 13:41:32.890600       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-8hwr9\"\nI0919 13:41:32.915231       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-2cdw2\"\nI0919 13:41:32.943032       1 pv_controller.go:879] volume \"pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9\" entered phase \"Bound\"\nI0919 13:41:32.943160       1 pv_controller.go:982] volume \"pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9\" bound to claim \"volume-expand-1257/aws8ttsj\"\nI0919 13:41:32.957866       1 pv_controller.go:823] claim \"volume-expand-1257/aws8ttsj\" entered phase \"Bound\"\nI0919 13:41:32.963409       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created 
pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-qjpqt\"\nE0919 13:41:32.999474       1 tokens_controller.go:262] error synchronizing serviceaccount deployment-3917/default: secrets \"default-token-bbx68\" is forbidden: unable to create new content in namespace deployment-3917 because it is being terminated\nI0919 13:41:33.016576       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-ppkqd\"\nI0919 13:41:33.067120       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-r5ldt\"\nI0919 13:41:33.164645       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-b2w8w\"\nI0919 13:41:33.214090       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-qlzkr\"\nI0919 13:41:33.262572       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-xwc7t\"\nI0919 13:41:33.314398       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" 
kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-hxx57\"\nI0919 13:41:33.363225       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-mw7sc\"\nI0919 13:41:33.384689       1 namespace_controller.go:185] Namespace has been deleted init-container-7866\nI0919 13:41:33.400276       1 controller_ref_manager.go:232] patching pod statefulset-6517_ss-0 to remove its controllerRef to apps/v1/StatefulSet:ss\nI0919 13:41:33.410385       1 garbagecollector.go:475] \"Processing object\" object=\"statefulset-6517/ss\" objectUID=f117c7c2-4205-4efe-8cb0-f118bc8a0f40 kind=\"StatefulSet\" virtual=false\nI0919 13:41:33.418264       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-q5g9z\"\nI0919 13:41:33.419449       1 garbagecollector.go:514] object [apps/v1/StatefulSet, namespace: statefulset-6517, name: ss, uid: f117c7c2-4205-4efe-8cb0-f118bc8a0f40]'s doesn't have an owner, continue on next item\nE0919 13:41:33.431936       1 stateful_set.go:413] error syncing StatefulSet statefulset-6517/ss, requeuing: pods \"ss-0\" already exists, the server was not able to generate a unique name for the object\nE0919 13:41:33.440243       1 stateful_set.go:413] error syncing StatefulSet statefulset-6517/ss, requeuing: pods \"ss-0\" already exists, the server was not able to generate a unique name for the object\nE0919 13:41:33.454028       1 stateful_set.go:413] error syncing StatefulSet statefulset-6517/ss, requeuing: pods \"ss-0\" already exists, the 
server was not able to generate a unique name for the object\nE0919 13:41:33.466108       1 stateful_set.go:413] error syncing StatefulSet statefulset-6517/ss, requeuing: pods \"ss-0\" already exists, the server was not able to generate a unique name for the object\nI0919 13:41:33.466338       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-qx5sr\"\nE0919 13:41:33.481350       1 stateful_set.go:413] error syncing StatefulSet statefulset-6517/ss, requeuing: pods \"ss-0\" already exists, the server was not able to generate a unique name for the object\nI0919 13:41:33.495910       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05a0f3ecbf9ddc4a6\") from node \"ip-172-20-62-71.eu-central-1.compute.internal\" \nI0919 13:41:33.517791       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-xlxts\"\nE0919 13:41:33.538988       1 tokens_controller.go:262] error synchronizing serviceaccount volume-9791/default: secrets \"default-token-j6p4b\" is forbidden: unable to create new content in namespace volume-9791 because it is being terminated\nI0919 13:41:33.566268       1 event.go:294] \"Event occurred\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: cleanup40-56605450-4c51-4138-88be-2583759cd383-69b4z\"\nE0919 13:41:33.580911       1 stateful_set.go:413] error syncing StatefulSet 
statefulset-6517/ss, requeuing: pods \"ss-0\" already exists, the server was not able to generate a unique name for the object\nI0919 13:41:33.641234       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-1755/inline-volume-tester-5t8lc-my-volume-0\"\nI0919 13:41:33.648676       1 garbagecollector.go:475] \"Processing object\" object=\"ephemeral-1755/inline-volume-tester-5t8lc\" objectUID=93caabd5-4da7-4fd6-b75e-d6e021885fb5 kind=\"Pod\" virtual=false\nI0919 13:41:33.650838       1 garbagecollector.go:594] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-1755, name: inline-volume-tester-5t8lc, uid: 93caabd5-4da7-4fd6-b75e-d6e021885fb5]\nI0919 13:41:33.651180       1 pv_controller.go:640] volume \"pvc-022d205f-9ec9-4fd7-ab1f-454333ea9538\" is released and reclaim policy \"Delete\" will be executed\nI0919 13:41:33.656064       1 pv_controller.go:879] volume \"pvc-022d205f-9ec9-4fd7-ab1f-454333ea9538\" entered phase \"Released\"\nI0919 13:41:33.658696       1 pv_controller.go:1340] isVolumeReleased[pvc-022d205f-9ec9-4fd7-ab1f-454333ea9538]: volume is released\nE0919 13:41:33.746818       1 stateful_set.go:413] error syncing StatefulSet statefulset-6517/ss, requeuing: pods \"ss-0\" already exists, the server was not able to generate a unique name for the object\nE0919 13:41:34.074133       1 stateful_set.go:413] error syncing StatefulSet statefulset-6517/ss, requeuing: pods \"ss-0\" already exists, the server was not able to generate a unique name for the object\nI0919 13:41:34.189076       1 namespace_controller.go:185] Namespace has been deleted downward-api-1496\nI0919 13:41:34.217268       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5973-4693\nI0919 13:41:34.527563       1 event.go:294] \"Event occurred\" object=\"volume-expand-8303/awsqhgrf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before 
binding\"\nI0919 13:41:34.711223       1 stateful_set_control.go:521] StatefulSet statefulset-6517/ss terminating Pod ss-0 for scale down\nI0919 13:41:34.722151       1 event.go:294] \"Event occurred\" object=\"statefulset-6517/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0919 13:41:34.979683       1 namespace_controller.go:185] Namespace has been deleted replicaset-5310\nE0919 13:41:35.502534       1 namespace_controller.go:162] deletion of namespace webhook-6232 failed: unexpected items still remain in namespace: webhook-6232 for gvr: /v1, Resource=pods\nI0919 13:41:35.722521       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05a0f3ecbf9ddc4a6\") from node \"ip-172-20-62-71.eu-central-1.compute.internal\" \nI0919 13:41:35.722708       1 event.go:294] \"Event occurred\" object=\"volume-expand-1257/pod-127f83ee-b1f9-4abf-b3f6-caaf54305765\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9\\\" \"\nI0919 13:41:35.743354       1 namespace_controller.go:185] Namespace has been deleted tables-1447\nI0919 13:41:36.317514       1 namespace_controller.go:185] Namespace has been deleted kubectl-4527\nI0919 13:41:37.252754       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d\" need=0 deleting=1\nI0919 13:41:37.252849       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d\" relatedReplicaSets=[test-rolling-update-with-lb-8c8cdc96d test-rolling-update-with-lb-9c68d7c8b]\nI0919 13:41:37.252940       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-8c8cdc96d\" 
pod=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d-n57lq\"\nI0919 13:41:37.255967       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-8c8cdc96d to 0\"\nI0919 13:41:37.281005       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d-n57lq\" objectUID=916325d8-9ab6-4c5f-84cc-d2c3f4a560f3 kind=\"CiliumEndpoint\" virtual=false\nW0919 13:41:37.282498       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"deployment-6925/test-rolling-update-with-lb\", retrying. Error: EndpointSlice informer cache is out of date\nI0919 13:41:37.283801       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-8c8cdc96d-n57lq\"\nI0919 13:41:37.284179       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6925/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:41:37.290731       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-6925/test-rolling-update-with-lb-8c8cdc96d-n57lq\" objectUID=916325d8-9ab6-4c5f-84cc-d2c3f4a560f3 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0919 13:41:37.335048       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-2557/default: secrets \"default-token-p2f6q\" is forbidden: unable to create new content in namespace volumemode-2557 because it is being terminated\nI0919 13:41:37.500778       1 garbagecollector.go:475] \"Processing 
object\" object=\"container-probe-1184/test-webserver-9bd1d179-84dd-456e-81ba-31b17db85803\" objectUID=3cab4a4d-156c-4afe-9544-03b6eaad0de1 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:41:37.504050       1 garbagecollector.go:584] \"Deleting object\" object=\"container-probe-1184/test-webserver-9bd1d179-84dd-456e-81ba-31b17db85803\" objectUID=3cab4a4d-156c-4afe-9544-03b6eaad0de1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:41:37.543127       1 pv_controller.go:930] claim \"volume-4236/pvc-cn4qk\" bound to volume \"local-2fx9l\"\nI0919 13:41:37.547085       1 pv_controller.go:1340] isVolumeReleased[pvc-022d205f-9ec9-4fd7-ab1f-454333ea9538]: volume is released\nI0919 13:41:37.550336       1 pv_controller.go:879] volume \"local-2fx9l\" entered phase \"Bound\"\nI0919 13:41:37.550371       1 pv_controller.go:982] volume \"local-2fx9l\" bound to claim \"volume-4236/pvc-cn4qk\"\nI0919 13:41:37.559476       1 pv_controller.go:823] claim \"volume-4236/pvc-cn4qk\" entered phase \"Bound\"\nI0919 13:41:37.560466       1 event.go:294] \"Event occurred\" object=\"volume-expand-8303/awsqhgrf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nE0919 13:41:38.036653       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-4571/default: secrets \"default-token-gfx8k\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4571 because it is being terminated\nI0919 13:41:38.133864       1 namespace_controller.go:185] Namespace has been deleted deployment-3917\nI0919 13:41:38.152115       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-022d205f-9ec9-4fd7-ab1f-454333ea9538\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09e909a2d04479bfd\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:41:38.154816       1 
operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-022d205f-9ec9-4fd7-ab1f-454333ea9538\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09e909a2d04479bfd\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:41:38.241344       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4571/pod-ddfac904-82b5-4026-bdf5-7fce4115fb7c\" PVC=\"persistent-local-volumes-test-4571/pvc-w94sl\"\nI0919 13:41:38.241424       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4571/pvc-w94sl\"\nI0919 13:41:38.439993       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-4571/pod-ddfac904-82b5-4026-bdf5-7fce4115fb7c\" PVC=\"persistent-local-volumes-test-4571/pvc-w94sl\"\nI0919 13:41:38.440022       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-4571/pvc-w94sl\"\nI0919 13:41:38.450682       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-4571/pvc-w94sl\"\nI0919 13:41:38.462474       1 pv_controller.go:640] volume \"local-pvnglfp\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:41:38.466895       1 pv_controller.go:879] volume \"local-pvnglfp\" entered phase \"Released\"\nI0919 13:41:38.471275       1 pv_controller_base.go:521] deletion of claim \"persistent-local-volumes-test-4571/pvc-w94sl\" was already processed\nI0919 13:41:38.598905       1 namespace_controller.go:185] Namespace has been deleted volume-9791\nE0919 13:41:38.664411       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-5970/default: secrets \"default-token-h97f4\" is forbidden: unable to create new content in namespace ephemeral-5970 because it is being terminated\nE0919 13:41:38.785200       1 tokens_controller.go:262] error synchronizing serviceaccount 
downward-api-7638/default: secrets \"default-token-nlwfb\" is forbidden: unable to create new content in namespace downward-api-7638 because it is being terminated\nI0919 13:41:38.984047       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-945c6c889 to 1\"\nI0919 13:41:38.984275       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" need=1 creating=1\nI0919 13:41:38.994717       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-945c6c889-jfkk5\"\nI0919 13:41:38.998110       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6925/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0919 13:41:39.363234       1 resource_quota_controller.go:253] Operation cannot be fulfilled on resourcequotas \"quota-not-besteffort\": the object has been modified; please apply your changes to the latest version and try again\nE0919 13:41:39.815243       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:41:40.320394       1 event.go:294] \"Event occurred\" object=\"statefulset-7255/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-1 in StatefulSet ss2 successful\"\nE0919 13:41:40.433566       1 
tokens_controller.go:262] error synchronizing serviceaccount configmap-1102/default: secrets \"default-token-k2vh9\" is forbidden: unable to create new content in namespace configmap-1102 because it is being terminated\nI0919 13:41:40.612903       1 namespace_controller.go:185] Namespace has been deleted volumemode-801\nI0919 13:41:40.858975       1 namespace_controller.go:185] Namespace has been deleted nettest-1907\nE0919 13:41:41.081894       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:41:41.311533       1 namespace_controller.go:185] Namespace has been deleted volumemode-4172\nI0919 13:41:42.487047       1 namespace_controller.go:185] Namespace has been deleted volumemode-2557\nI0919 13:41:42.587606       1 stateful_set.go:440] StatefulSet has been deleted volumemode-2557-1216/csi-hostpathplugin\nI0919 13:41:42.587774       1 garbagecollector.go:475] \"Processing object\" object=\"volumemode-2557-1216/csi-hostpathplugin-0\" objectUID=3d565b6b-28a6-45a4-b62c-84267c7d484e kind=\"Pod\" virtual=false\nI0919 13:41:42.588012       1 garbagecollector.go:475] \"Processing object\" object=\"volumemode-2557-1216/csi-hostpathplugin-75956cc9f5\" objectUID=7975127b-d03c-4589-83d2-6ccbf952cdef kind=\"ControllerRevision\" virtual=false\nI0919 13:41:42.602177       1 garbagecollector.go:584] \"Deleting object\" object=\"volumemode-2557-1216/csi-hostpathplugin-75956cc9f5\" objectUID=7975127b-d03c-4589-83d2-6ccbf952cdef kind=\"ControllerRevision\" propagationPolicy=Background\nI0919 13:41:42.602664       1 garbagecollector.go:584] \"Deleting object\" object=\"volumemode-2557-1216/csi-hostpathplugin-0\" objectUID=3d565b6b-28a6-45a4-b62c-84267c7d484e kind=\"Pod\" propagationPolicy=Background\nI0919 13:41:43.734502       1 namespace_controller.go:185] Namespace has been deleted ephemeral-5970\nI0919 
13:41:43.837454       1 namespace_controller.go:185] Namespace has been deleted emptydir-3542\nI0919 13:41:43.913989       1 namespace_controller.go:185] Namespace has been deleted downward-api-7638\nI0919 13:41:44.314565       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-7e218e85-468a-4a48-beb7-84326edf5bc3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0eb8ce8f8f7a80df1\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:41:44.316562       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-7e218e85-468a-4a48-beb7-84326edf5bc3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0eb8ce8f8f7a80df1\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:41:45.145305       1 garbagecollector.go:475] \"Processing object\" object=\"statefulset-6517/ss-6bf6f6649c\" objectUID=c859635e-ba9e-4b16-a6d8-72c81941cca9 kind=\"ControllerRevision\" virtual=false\nI0919 13:41:45.145441       1 stateful_set.go:440] StatefulSet has been deleted statefulset-6517/ss\nI0919 13:41:45.148325       1 garbagecollector.go:584] \"Deleting object\" object=\"statefulset-6517/ss-6bf6f6649c\" objectUID=c859635e-ba9e-4b16-a6d8-72c81941cca9 kind=\"ControllerRevision\" propagationPolicy=Background\nE0919 13:41:45.328295       1 namespace_controller.go:162] deletion of namespace cronjob-316 failed: unexpected items still remain in namespace: cronjob-316 for gvr: /v1, Resource=pods\nI0919 13:41:45.365390       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"statefulset-6517/datadir-ss-0\"\nI0919 13:41:45.374897       1 pv_controller.go:640] volume \"pvc-7e218e85-468a-4a48-beb7-84326edf5bc3\" is released and reclaim policy \"Delete\" will be executed\nI0919 13:41:45.379871       1 pv_controller.go:879] volume \"pvc-7e218e85-468a-4a48-beb7-84326edf5bc3\" entered phase \"Released\"\nI0919 13:41:45.384534       1 pv_controller.go:1340] 
isVolumeReleased[pvc-7e218e85-468a-4a48-beb7-84326edf5bc3]: volume is released\nE0919 13:41:45.468300       1 namespace_controller.go:162] deletion of namespace cronjob-316 failed: unexpected items still remain in namespace: cronjob-316 for gvr: /v1, Resource=pods\nI0919 13:41:45.516791       1 namespace_controller.go:185] Namespace has been deleted configmap-1102\nE0919 13:41:45.598044       1 namespace_controller.go:162] deletion of namespace cronjob-316 failed: unexpected items still remain in namespace: cronjob-316 for gvr: /v1, Resource=pods\nE0919 13:41:45.739009       1 namespace_controller.go:162] deletion of namespace cronjob-316 failed: unexpected items still remain in namespace: cronjob-316 for gvr: /v1, Resource=pods\nE0919 13:41:45.891023       1 namespace_controller.go:162] deletion of namespace cronjob-316 failed: unexpected items still remain in namespace: cronjob-316 for gvr: /v1, Resource=pods\nE0919 13:41:46.078774       1 namespace_controller.go:162] deletion of namespace cronjob-316 failed: unexpected items still remain in namespace: cronjob-316 for gvr: /v1, Resource=pods\nE0919 13:41:46.373201       1 namespace_controller.go:162] deletion of namespace cronjob-316 failed: unexpected items still remain in namespace: cronjob-316 for gvr: /v1, Resource=pods\nE0919 13:41:46.792208       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-6805/default: secrets \"default-token-8w2gs\" is forbidden: unable to create new content in namespace resourcequota-6805 because it is being terminated\nI0919 13:41:46.815107       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-6805/quota-besteffort\nI0919 13:41:46.817536       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-6805/quota-not-besteffort\nE0919 13:41:46.886694       1 namespace_controller.go:162] deletion of namespace cronjob-316 failed: unexpected items still remain in namespace: cronjob-316 for gvr: /v1, 
Resource=pods\nI0919 13:41:47.931396       1 namespace_controller.go:185] Namespace has been deleted security-context-2960\nI0919 13:41:47.933947       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-2565/aws58nz6\"\nI0919 13:41:47.956642       1 pv_controller.go:640] volume \"pvc-d632635d-4f5f-404c-a413-7e50184f212c\" is released and reclaim policy \"Delete\" will be executed\nI0919 13:41:47.966042       1 pv_controller.go:879] volume \"pvc-d632635d-4f5f-404c-a413-7e50184f212c\" entered phase \"Released\"\nI0919 13:41:47.978089       1 pv_controller.go:1340] isVolumeReleased[pvc-d632635d-4f5f-404c-a413-7e50184f212c]: volume is released\nE0919 13:41:47.981084       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-2557-1216/default: serviceaccounts \"default\" not found\nI0919 13:41:47.991773       1 namespace_controller.go:185] Namespace has been deleted container-probe-1184\nI0919 13:41:48.125996       1 pv_controller.go:879] volume \"local-pv5qfnl\" entered phase \"Available\"\nE0919 13:41:48.179603       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:41:48.238326       1 pv_controller.go:930] claim \"persistent-local-volumes-test-2069/pvc-ntj6k\" bound to volume \"local-pv5qfnl\"\nI0919 13:41:48.251304       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-4571\nI0919 13:41:48.262593       1 pv_controller.go:879] volume \"local-pv5qfnl\" entered phase \"Bound\"\nI0919 13:41:48.262620       1 pv_controller.go:982] volume \"local-pv5qfnl\" bound to claim \"persistent-local-volumes-test-2069/pvc-ntj6k\"\nI0919 13:41:48.270685       1 pv_controller.go:823] claim \"persistent-local-volumes-test-2069/pvc-ntj6k\" entered phase \"Bound\"\nI0919 13:41:49.667857       1 reconciler.go:219] attacherDetacher.DetachVolume 
started for volume \"pvc-d632635d-4f5f-404c-a413-7e50184f212c\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02c938c3ce6a49373\") on node \"ip-172-20-48-58.eu-central-1.compute.internal\" \nI0919 13:41:49.671074       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-d632635d-4f5f-404c-a413-7e50184f212c\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02c938c3ce6a49373\") on node \"ip-172-20-48-58.eu-central-1.compute.internal\" \nI0919 13:41:50.721438       1 pv_controller.go:1340] isVolumeReleased[pvc-022d205f-9ec9-4fd7-ab1f-454333ea9538]: volume is released\nI0919 13:41:50.919841       1 pv_controller_base.go:521] deletion of claim \"ephemeral-1755/inline-volume-tester-5t8lc-my-volume-0\" was already processed\nI0919 13:41:51.783879       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-022d205f-9ec9-4fd7-ab1f-454333ea9538\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-09e909a2d04479bfd\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:41:51.789404       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-7e218e85-468a-4a48-beb7-84326edf5bc3\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0eb8ce8f8f7a80df1\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:41:51.902187       1 namespace_controller.go:185] Namespace has been deleted resourcequota-6805\nI0919 13:41:51.980224       1 event.go:294] \"Event occurred\" object=\"volume-expand-1257/aws8ttsj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalExpanding\" message=\"CSI migration enabled for kubernetes.io/aws-ebs; waiting for external resizer to expand the pvc\"\nE0919 13:41:52.099970       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 
13:41:52.544507       1 event.go:294] "Event occurred" object="volume-expand-8303/awsqhgrf" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0919 13:41:52.548188       1 pv_controller.go:1340] isVolumeReleased[pvc-7e218e85-468a-4a48-beb7-84326edf5bc3]: volume is released
I0919 13:41:52.548916       1 pv_controller.go:1340] isVolumeReleased[pvc-d632635d-4f5f-404c-a413-7e50184f212c]: volume is released
E0919 13:41:52.562813       1 pv_protection_controller.go:118] PV pvc-7e218e85-468a-4a48-beb7-84326edf5bc3 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-7e218e85-468a-4a48-beb7-84326edf5bc3": the object has been modified; please apply your changes to the latest version and try again
I0919 13:41:52.566679       1 pv_controller_base.go:521] deletion of claim "statefulset-6517/datadir-ss-0" was already processed
I0919 13:41:52.638260       1 namespace_controller.go:185] Namespace has been deleted cronjob-316
I0919 13:41:53.057928       1 event.go:294] "Event occurred" object="statefulset-7255/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
I0919 13:41:53.148726       1 namespace_controller.go:185] Namespace has been deleted volumemode-2557-1216
I0919 13:41:53.209753       1 pv_controller.go:1340] isVolumeReleased[pvc-d632635d-4f5f-404c-a413-7e50184f212c]: volume is released
I0919 13:41:53.289067       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-d632635d-4f5f-404c-a413-7e50184f212c" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-02c938c3ce6a49373") on node "ip-172-20-48-58.eu-central-1.compute.internal" 
I0919 13:41:53.325781       1 pv_controller_base.go:521] deletion of claim "volume-2565/aws58nz6" was already processed
E0919 13:41:53.743615       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-6187/default: secrets "default-token-4fhmr" is forbidden: unable to create new content in namespace provisioning-6187 because it is being terminated
I0919 13:41:53.791739       1 garbagecollector.go:217] syncing garbage collector with updated resources from discovery (attempt 1): added: [mygroup.example.com/v1beta1, Resource=noxus], removed: []
I0919 13:41:53.810518       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0919 13:41:53.911774       1 shared_informer.go:247] Caches are synced for garbage collector 
I0919 13:41:53.911914       1 garbagecollector.go:258] synced garbage collector
E0919 13:41:54.792015       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-421/pvc-82rmx: storageclass.storage.k8s.io "volume-421" not found
I0919 13:41:54.792407       1 event.go:294] "Event occurred" object="volume-421/pvc-82rmx" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-421\" not found"
I0919 13:41:54.826937       1 namespace_controller.go:185] Namespace has been deleted disruption-2657
I0919 13:41:54.906650       1 pv_controller.go:879] volume "local-l8svp" entered phase "Available"
I0919 13:41:54.964059       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-293/csi-hostpath6gj8x"
I0919 13:41:54.970127       1 pv_controller.go:640] volume "pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c" is released and reclaim policy "Delete" will be executed
I0919 13:41:54.973643       1 pv_controller.go:879] volume "pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c" entered phase "Released"
I0919 13:41:54.976257       1 pv_controller.go:1340] isVolumeReleased[pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c]: volume is released
I0919 13:41:55.006007       1 pv_controller_base.go:521] deletion of claim "provisioning-293/csi-hostpath6gj8x" was already processed
E0919 13:41:55.570055       1 tokens_controller.go:262] error synchronizing serviceaccount node-lease-test-2735/default: secrets "default-token-br92w" is forbidden: unable to create new content in namespace node-lease-test-2735 because it is being terminated
I0919 13:41:55.661445       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-2069/pod-4cc19302-31d4-41dc-9169-af349e21e989" PVC="persistent-local-volumes-test-2069/pvc-ntj6k"
I0919 13:41:55.661642       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-2069/pvc-ntj6k"
I0919 13:41:55.931668       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9888^4") on node "ip-172-20-50-204.eu-central-1.compute.internal" 
I0919 13:41:55.935093       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9888^4") on node "ip-172-20-50-204.eu-central-1.compute.internal" 
I0919 13:41:56.119285       1 namespace_controller.go:185] Namespace has been deleted pods-6027
I0919 13:41:56.390201       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-5c557bc5bf to 6"
I0919 13:41:56.390370       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 creating=6
I0919 13:41:56.396623       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:41:56.400254       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-974ll"
I0919 13:41:56.411406       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-2fkg8"
I0919 13:41:56.415801       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-9nmmj"
I0919 13:41:56.435381       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-fzqq2"
I0919 13:41:56.435412       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-64qm6"
I0919 13:41:56.435802       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-trx5f"
I0919 13:41:56.477400       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9888^4") on node "ip-172-20-50-204.eu-central-1.compute.internal" 
I0919 13:41:56.621389       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Warning" reason="DeploymentRollbackRevisionNotFound" message="Unable to find last revision."
I0919 13:41:56.937999       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:41:57.169120       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=5 deleting=1
I0919 13:41:57.169444       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-5c557bc5bf" relatedReplicaSets=[webserver-5c557bc5bf]
I0919 13:41:57.169984       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-trx5f"
I0919 13:41:57.169745       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-5c557bc5bf to 5"
I0919 13:41:57.178583       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:41:57.199650       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-trx5f"
I0919 13:41:57.207070       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-9888/pvc-snr8q"
E0919 13:41:57.244043       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-2388/pvc-qx875: storageclass.storage.k8s.io "provisioning-2388" not found
I0919 13:41:57.244090       1 event.go:294] "Event occurred" object="provisioning-2388/pvc-qx875" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-2388\" not found"
I0919 13:41:57.253113       1 pv_controller.go:640] volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" is released and reclaim policy "Delete" will be executed
I0919 13:41:57.262970       1 pv_controller.go:879] volume "pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2" entered phase "Released"
I0919 13:41:57.277597       1 pv_controller.go:1340] isVolumeReleased[pvc-e902d0d5-07a1-400f-9d05-97b4542ff1d2]: volume is released
I0919 13:41:57.301883       1 pv_controller_base.go:521] deletion of claim "csi-mock-volumes-9888/pvc-snr8q" was already processed
I0919 13:41:57.366315       1 pv_controller.go:879] volume "local-gc28x" entered phase "Available"
I0919 13:41:57.396690       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 creating=1
I0919 13:41:57.398100       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-5c557bc5bf to 6"
I0919 13:41:57.407423       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-ndknx"
I0919 13:41:57.417955       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:41:57.751692       1 garbagecollector.go:475] "Processing object" object="proxy-8480/test-service-mgvlr" objectUID=4b6af93b-f194-4e10-9c98-2f09f625eb45 kind="EndpointSlice" virtual=false
I0919 13:41:57.758760       1 garbagecollector.go:584] "Deleting object" object="proxy-8480/test-service-mgvlr" objectUID=4b6af93b-f194-4e10-9c98-2f09f625eb45 kind="EndpointSlice" propagationPolicy=Background
I0919 13:41:57.801178       1 pv_controller.go:879] volume "local-pvjw7c6" entered phase "Available"
I0919 13:41:57.883402       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-964/pvc-rb8nx"
I0919 13:41:57.899623       1 pv_controller.go:640] volume "local-nzc9j" is released and reclaim policy "Retain" will be executed
I0919 13:41:57.906421       1 pv_controller.go:930] claim "persistent-local-volumes-test-1557/pvc-frrlx" bound to volume "local-pvjw7c6"
I0919 13:41:57.915766       1 pv_controller.go:879] volume "local-nzc9j" entered phase "Released"
I0919 13:41:57.937967       1 pv_controller.go:879] volume "local-pvjw7c6" entered phase "Bound"
I0919 13:41:57.938007       1 pv_controller.go:982] volume "local-pvjw7c6" bound to claim "persistent-local-volumes-test-1557/pvc-frrlx"
I0919 13:41:57.963521       1 pv_controller.go:823] claim "persistent-local-volumes-test-1557/pvc-frrlx" entered phase "Bound"
I0919 13:41:58.000199       1 pv_controller_base.go:521] deletion of claim "volume-964/pvc-rb8nx" was already processed
I0919 13:41:58.087974       1 garbagecollector.go:475] "Processing object" object="services-4026/nodeport-collision-1-c99sw" objectUID=35d837b3-81ec-468b-9ed1-dec2e6476c1f kind="EndpointSlice" virtual=false
I0919 13:41:58.101429       1 garbagecollector.go:584] "Deleting object" object="services-4026/nodeport-collision-1-c99sw" objectUID=35d837b3-81ec-468b-9ed1-dec2e6476c1f kind="EndpointSlice" propagationPolicy=Background
I0919 13:41:58.357967       1 garbagecollector.go:475] "Processing object" object="services-4026/nodeport-collision-2-hsq7z" objectUID=cc2503c3-c461-4c07-8bf8-16a6f0a9997c kind="EndpointSlice" virtual=false
I0919 13:41:58.378594       1 garbagecollector.go:584] "Deleting object" object="services-4026/nodeport-collision-2-hsq7z" objectUID=cc2503c3-c461-4c07-8bf8-16a6f0a9997c kind="EndpointSlice" propagationPolicy=Background
E0919 13:41:58.487929       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-2628/default: secrets "default-token-6qx6x" is forbidden: unable to create new content in namespace configmap-2628 because it is being terminated
I0919 13:41:58.691543       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 creating=1
I0919 13:41:58.700432       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-64qm6" objectUID=c465b1fc-b1b0-4a48-97d5-b85efc820e36 kind="CiliumEndpoint" virtual=false
I0919 13:41:58.701641       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-zk4ct"
I0919 13:41:58.708775       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-64qm6" objectUID=c465b1fc-b1b0-4a48-97d5-b85efc820e36 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:41:58.764161       1 namespace_controller.go:185] Namespace has been deleted provisioning-6187
I0919 13:41:58.808952       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 creating=1
I0919 13:41:58.813325       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-9nmmj" objectUID=54fef2f1-b222-480e-9afd-46c40a40d328 kind="CiliumEndpoint" virtual=false
I0919 13:41:58.820234       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-9lf45"
I0919 13:41:58.822930       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-9nmmj" objectUID=54fef2f1-b222-480e-9afd-46c40a40d328 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:41:58.925270       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 creating=1
I0919 13:41:58.930732       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-fzqq2" objectUID=c1d19fcb-49d8-4ae3-92fb-05d6080b2496 kind="CiliumEndpoint" virtual=false
I0919 13:41:58.931600       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-cx8wp"
I0919 13:41:58.937189       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-fzqq2" objectUID=c1d19fcb-49d8-4ae3-92fb-05d6080b2496 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:41:59.043860       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 creating=1
I0919 13:41:59.046368       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-ndknx" objectUID=0d58f4c3-de4c-4477-b6e0-d70839d694c3 kind="CiliumEndpoint" virtual=false
I0919 13:41:59.050862       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-ndknx" objectUID=0d58f4c3-de4c-4477-b6e0-d70839d694c3 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:41:59.051679       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-lvp7l"
I0919 13:41:59.088137       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-1257/aws8ttsj"
I0919 13:41:59.095957       1 pv_controller.go:640] volume "pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9" is released and reclaim policy "Delete" will be executed
I0919 13:41:59.100433       1 pv_controller.go:879] volume "pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9" entered phase "Released"
I0919 13:41:59.105385       1 pv_controller.go:1340] isVolumeReleased[pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9]: volume is released
I0919 13:41:59.260928       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-6925/test-rolling-update-with-lb-9c68d7c8b" need=2 deleting=1
I0919 13:41:59.261189       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-6925/test-rolling-update-with-lb-9c68d7c8b" relatedReplicaSets=[test-rolling-update-with-lb-8c8cdc96d test-rolling-update-with-lb-9c68d7c8b test-rolling-update-with-lb-945c6c889]
I0919 13:41:59.261381       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-9c68d7c8b" pod="deployment-6925/test-rolling-update-with-lb-9c68d7c8b-55ggk"
I0919 13:41:59.261621       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-9c68d7c8b to 2"
I0919 13:41:59.271359       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-6925/test-rolling-update-with-lb-945c6c889" need=2 creating=1
I0919 13:41:59.276696       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-945c6c889 to 2"
I0919 13:41:59.276904       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb-9c68d7c8b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-9c68d7c8b-55ggk"
I0919 13:41:59.281601       1 garbagecollector.go:475] "Processing object" object="deployment-6925/test-rolling-update-with-lb-9c68d7c8b-55ggk" objectUID=7f85f9aa-f1fa-47fe-9b10-338b80e024cf kind="CiliumEndpoint" virtual=false
I0919 13:41:59.289156       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6925/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:41:59.289850       1 garbagecollector.go:584] "Deleting object" object="deployment-6925/test-rolling-update-with-lb-9c68d7c8b-55ggk" objectUID=7f85f9aa-f1fa-47fe-9b10-338b80e024cf kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:41:59.294378       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb-945c6c889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-945c6c889-sk8qm"
I0919 13:41:59.320394       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-2069/pod-4cc19302-31d4-41dc-9169-af349e21e989" PVC="persistent-local-volumes-test-2069/pvc-ntj6k"
I0919 13:41:59.320598       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-2069/pvc-ntj6k"
I0919 13:41:59.334656       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6925/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:41:59.588883       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=7 creating=1
I0919 13:41:59.589522       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-5c557bc5bf to 7"
I0919 13:41:59.600717       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-wk9zm"
I0919 13:41:59.612550       1 namespace_controller.go:185] Namespace has been deleted security-context-test-6555
I0919 13:41:59.813067       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05a0f3ecbf9ddc4a6") on node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:41:59.829190       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-05a0f3ecbf9ddc4a6") on node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:41:59.837470       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-5c557bc5bf to 8"
I0919 13:41:59.837731       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=8 creating=1
I0919 13:41:59.868268       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-b4qtd"
I0919 13:42:00.066540       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=7 deleting=1
I0919 13:42:00.066585       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-5c557bc5bf" relatedReplicaSets=[webserver-5c557bc5bf]
I0919 13:42:00.067182       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-wk9zm"
I0919 13:42:00.071522       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-5c557bc5bf to 7"
I0919 13:42:00.090080       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-wk9zm"
I0919 13:42:00.117461       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-2069/pod-4cc19302-31d4-41dc-9169-af349e21e989" PVC="persistent-local-volumes-test-2069/pvc-ntj6k"
I0919 13:42:00.117642       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-2069/pvc-ntj6k"
I0919 13:42:00.132728       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-2069/pvc-ntj6k"
I0919 13:42:00.145291       1 job_controller.go:406] enqueueing job cronjob-2237/concurrent-27200982
I0919 13:42:00.145809       1 event.go:294] "Event occurred" object="cronjob-2237/concurrent" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-27200982"
I0919 13:42:00.157306       1 pv_controller.go:640] volume "local-pv5qfnl" is released and reclaim policy "Retain" will be executed
I0919 13:42:00.169653       1 pv_controller.go:879] volume "local-pv5qfnl" entered phase "Released"
I0919 13:42:00.169768       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-2237/concurrent" resourceVersion="18415" err="Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again"
E0919 13:42:00.169785       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-2237/concurrent, requeuing: Operation cannot be fulfilled on cronjobs.batch "concurrent": the object has been modified; please apply your changes to the latest version and try again
I0919 13:42:00.170730       1 event.go:294] "Event occurred" object="cronjob-2237/concurrent-27200982" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-27200982--1-66mnr"
I0919 13:42:00.174245       1 job_controller.go:406] enqueueing job cronjob-2237/concurrent-27200982
I0919 13:42:00.179403       1 pv_controller_base.go:521] deletion of claim "persistent-local-volumes-test-2069/pvc-ntj6k" was already processed
I0919 13:42:00.188521       1 job_controller.go:406] enqueueing job cronjob-2237/concurrent-27200982
I0919 13:42:00.195000       1 job_controller.go:406] enqueueing job cronjob-2237/concurrent-27200982
I0919 13:42:00.290059       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=8 creating=1
I0919 13:42:00.291534       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-5c557bc5bf to 8"
I0919 13:42:00.304045       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-nrjxf"
I0919 13:42:00.303779       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:00.317372       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-56fb65c6f6" need=2 creating=2
I0919 13:42:00.317933       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-56fb65c6f6 to 2"
I0919 13:42:00.341306       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-jvb2d"
I0919 13:42:00.354037       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:00.369148       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-5c557bc5bf to 6"
I0919 13:42:00.369427       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 deleting=2
I0919 13:42:00.369456       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-5c557bc5bf" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6]
I0919 13:42:00.369527       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-974ll"
I0919 13:42:00.370073       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-lvp7l"
I0919 13:42:00.370215       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-q59g9"
I0919 13:42:00.392817       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-974ll" objectUID=0fe23790-7485-4616-ba51-445e994c056e kind="CiliumEndpoint" virtual=false
I0919 13:42:00.396844       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-lvp7l"
I0919 13:42:00.396925       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-56fb65c6f6\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:00.397843       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-974ll"
I0919 13:42:00.404340       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-974ll" objectUID=0fe23790-7485-4616-ba51-445e994c056e kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:00.409482       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-56fb65c6f6 to 4"
I0919 13:42:00.425923       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
E0919 13:42:00.447064       1 replica_set.go:536] sync "deployment-9644/webserver-56fb65c6f6" failed with Operation cannot be fulfilled on replicasets.apps "webserver-56fb65c6f6": the object has been modified; please apply your changes to the latest version and try again
I0919 13:42:00.447436       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-56fb65c6f6" need=4 creating=2
I0919 13:42:00.498995       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-lscbs"
I0919 13:42:00.545794       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-295fz"
I0919 13:42:00.634859       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 creating=1
I0919 13:42:00.637165       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-6180/test-rs" need=1 creating=1
I0919 13:42:00.642609       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-2fkg8" objectUID=4e0c375a-8449-43e4-8f0b-d786f744566c kind="CiliumEndpoint" virtual=false
I0919 13:42:00.654535       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-2fkg8" objectUID=4e0c375a-8449-43e4-8f0b-d786f744566c kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:00.654852       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-56fb65c6f6" need=4 creating=1
I0919 13:42:00.693327       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-2735
I0919 13:42:00.747633       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-hvlpb"
E0919 13:42:00.760913       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-1755/default: secrets "default-token-s84bj" is forbidden: unable to create new content in namespace ephemeral-1755 because it is being terminated
I0919 13:42:00.801465       1 event.go:294] "Event occurred" object="replicaset-6180/test-rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rs-h7jhz"
I0919 13:42:00.846239       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-ptbzk"
I0919 13:42:00.900922       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=6 creating=1
E0919 13:42:01.072108       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-6517/default: secrets "default-token-ptrpd" is forbidden: unable to create new content in namespace statefulset-6517 because it is being terminated
I0919 13:42:01.097559       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-5c557bc5bf-7978s"
E0919 13:42:02.761063       1 tokens_controller.go:262] error synchronizing serviceaccount endpointslice-9549/default: secrets "default-token-rn2v2" is forbidden: unable to create new content in namespace endpointslice-9549 because it is being terminated
I0919 13:42:02.854064       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-293^448ba737-194f-11ec-a3a7-1ee7b3b50a4f") on node "ip-172-20-50-204.eu-central-1.compute.internal" 
I0919 13:42:02.859583       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-293^448ba737-194f-11ec-a3a7-1ee7b3b50a4f") on node "ip-172-20-50-204.eu-central-1.compute.internal" 
I0919 13:42:03.210355       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=5 deleting=1
I0919 13:42:03.210394       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-5c557bc5bf" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6]
I0919 13:42:03.210535       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-hvlpb"
I0919 13:42:03.211325       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-5c557bc5bf to 5"
I0919 13:42:03.222626       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-hvlpb"
I0919 13:42:03.259867       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:03.262999       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=2 deleting=3
I0919 13:42:03.263037       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-5c557bc5bf" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]
I0919 13:42:03.264251       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-7978s"
I0919 13:42:03.266522       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-b4qtd"
I0919 13:42:03.264144       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-5c557bc5bf to 2"
I0919 13:42:03.266603       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-zk4ct"
I0919 13:42:03.274435       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-6756b7b6d4" need=3 creating=3
I0919 13:42:03.275616       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-6756b7b6d4 to 3"
I0919 13:42:03.287472       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:03.291395       1 event.go:294] "Event occurred" object="deployment-9644/webserver-6756b7b6d4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6756b7b6d4-jjtdl"
I0919 13:42:03.292314       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-zk4ct"
I0919 13:42:03.292340       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-7978s"
I0919 13:42:03.292351       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-b4qtd"
I0919 13:42:03.294419       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-b4qtd" objectUID=ef63d470-7f83-4ea7-8b5b-bec7bc322e99 kind="CiliumEndpoint" virtual=false
I0919 13:42:03.299720       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-zk4ct" objectUID=43058325-8822-4b6a-ae6b-79b6e3ce861d kind="CiliumEndpoint" virtual=false
I0919 13:42:03.304668       1 event.go:294] "Event occurred" object="deployment-9644/webserver-6756b7b6d4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-6756b7b6d4-df6hk"
I0919 13:42:03.314540       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-b4qtd" objectUID=ef63d470-7f83-4ea7-8b5b-bec7bc322e99 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:03.315578       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-zk4ct" objectUID=43058325-8822-4b6a-ae6b-79b6e3ce861d kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:03.315972       1 event.go:294] "Event occurred" 
object=\"deployment-9644/webserver-6756b7b6d4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6756b7b6d4-cqz2q\"\nI0919 13:42:03.388494       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-2d6b70e3-c55b-4bd3-be8d-28f09fe4b99c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-293^448ba737-194f-11ec-a3a7-1ee7b3b50a4f\") on node \"ip-172-20-50-204.eu-central-1.compute.internal\" \nI0919 13:42:03.408142       1 namespace_controller.go:185] Namespace has been deleted apply-2266\nI0919 13:42:03.466487       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8169-5833/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0919 13:42:03.519084       1 namespace_controller.go:185] Namespace has been deleted configmap-2628\nI0919 13:42:03.573373       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8169-5833/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nE0919 13:42:03.728592       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-2069/default: secrets \"default-token-zg7hh\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-2069 because it is being terminated\nE0919 13:42:03.940454       1 tokens_controller.go:262] error synchronizing serviceaccount volume-2565/default: serviceaccounts \"default\" not found\nI0919 13:42:04.452879       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-vxrdq\" objectUID=2e888612-4114-4dd9-973b-976e325ea124 kind=\"Pod\" virtual=false\nI0919 13:42:04.452944       1 
garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-l9wlh\" objectUID=da9bc1ae-3eb3-407b-a9ff-3cca4e19c464 kind=\"Pod\" virtual=false\nI0919 13:42:04.453011       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-87c52\" objectUID=6bec1af6-3899-4d62-b9cc-db0d2c85a1a3 kind=\"Pod\" virtual=false\nI0919 13:42:04.453029       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-tlv5p\" objectUID=d4001bd3-51dc-4bc2-aeaf-0e292dce8619 kind=\"Pod\" virtual=false\nI0919 13:42:04.453588       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-r5ldt\" objectUID=86fd5d18-cc14-4808-84f6-0a41f7bea8b1 kind=\"Pod\" virtual=false\nI0919 13:42:04.453697       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-qlzkr\" objectUID=a4eca9c6-aa49-42b1-b9fd-1505f1f295db kind=\"Pod\" virtual=false\nI0919 13:42:04.453794       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-wdz6f\" objectUID=9fa48ad2-bad7-4313-b2c9-feaae0f39230 kind=\"Pod\" virtual=false\nI0919 13:42:04.453891       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-8rc67\" objectUID=039536f8-621b-499c-9df7-76470c7e550a kind=\"Pod\" virtual=false\nI0919 13:42:04.453988       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-5svqb\" objectUID=e5f0366b-25a2-489e-ac69-e72646710dc5 kind=\"Pod\" virtual=false\nI0919 13:42:04.454084       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-5qhr4\" 
objectUID=ea6f1268-c579-4071-8d41-bfce8bc36ada kind=\"Pod\" virtual=false\nI0919 13:42:04.454176       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-qjpqt\" objectUID=d516cd03-cfca-4a5d-a010-bcb9c7d2ca77 kind=\"Pod\" virtual=false\nI0919 13:42:04.454260       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-xwc7t\" objectUID=c5bf24df-8749-4933-a0ac-47a2ff961028 kind=\"Pod\" virtual=false\nI0919 13:42:04.454350       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-dm56r\" objectUID=fc08eb8e-9e49-4995-86aa-815e5ae17367 kind=\"Pod\" virtual=false\nI0919 13:42:04.454452       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-t8v7m\" objectUID=42c72984-428b-4552-ac08-e207506a8edd kind=\"Pod\" virtual=false\nI0919 13:42:04.454551       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-zzbcp\" objectUID=a93d802a-45b1-4bed-a521-5c58dcf6d57d kind=\"Pod\" virtual=false\nI0919 13:42:04.454654       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-mw7sc\" objectUID=df0ef751-2f84-4e2b-b163-4b4c5284f22d kind=\"Pod\" virtual=false\nI0919 13:42:04.454766       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-q5g9z\" objectUID=c80a7e44-090d-4db2-a109-85ac2af890e3 kind=\"Pod\" virtual=false\nI0919 13:42:04.454876       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-dzrmm\" objectUID=88307746-b7bf-4564-bbfe-ceff3648d5e3 kind=\"Pod\" virtual=false\nI0919 13:42:04.454980       1 garbagecollector.go:475] \"Processing object\" 
object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-ldsf9\" objectUID=89d886b2-b6e5-4673-a801-8bec49428fe1 kind=\"Pod\" virtual=false\nI0919 13:42:04.452965       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-dkj5g\" objectUID=2938a159-95f3-40a0-9c06-b732c5019842 kind=\"Pod\" virtual=false\nI0919 13:42:04.464767       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-qlzkr\" objectUID=a4eca9c6-aa49-42b1-b9fd-1505f1f295db kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.465031       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-t8v7m\" objectUID=42c72984-428b-4552-ac08-e207506a8edd kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.465358       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-8rc67\" objectUID=039536f8-621b-499c-9df7-76470c7e550a kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.465640       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-dm56r\" objectUID=fc08eb8e-9e49-4995-86aa-815e5ae17367 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.465849       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-wdz6f\" objectUID=9fa48ad2-bad7-4313-b2c9-feaae0f39230 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.468690       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-vxrdq\" objectUID=2e888612-4114-4dd9-973b-976e325ea124 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.469037       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-dzrmm\" 
objectUID=88307746-b7bf-4564-bbfe-ceff3648d5e3 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.470549       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-mw7sc\" objectUID=df0ef751-2f84-4e2b-b163-4b4c5284f22d kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.470839       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-ldsf9\" objectUID=89d886b2-b6e5-4673-a801-8bec49428fe1 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.471003       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-zzbcp\" objectUID=a93d802a-45b1-4bed-a521-5c58dcf6d57d kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.471155       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-l9wlh\" objectUID=da9bc1ae-3eb3-407b-a9ff-3cca4e19c464 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.471300       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-xwc7t\" objectUID=c5bf24df-8749-4933-a0ac-47a2ff961028 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.471374       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-5svqb\" objectUID=e5f0366b-25a2-489e-ac69-e72646710dc5 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.471447       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-dkj5g\" objectUID=2938a159-95f3-40a0-9c06-b732c5019842 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.471642       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-q5g9z\" objectUID=c80a7e44-090d-4db2-a109-85ac2af890e3 kind=\"Pod\" 
propagationPolicy=Background\nI0919 13:42:04.471754       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-r5ldt\" objectUID=86fd5d18-cc14-4808-84f6-0a41f7bea8b1 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.471932       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-87c52\" objectUID=6bec1af6-3899-4d62-b9cc-db0d2c85a1a3 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.472034       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-5qhr4\" objectUID=ea6f1268-c579-4071-8d41-bfce8bc36ada kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.472160       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-tlv5p\" objectUID=d4001bd3-51dc-4bc2-aeaf-0e292dce8619 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.472303       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-qjpqt\" objectUID=d516cd03-cfca-4a5d-a010-bcb9c7d2ca77 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.474791       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-zvs22\" objectUID=0fa46dc1-6ecc-4e18-bbb0-3bd4f7b1d6a9 kind=\"Pod\" virtual=false\nI0919 13:42:04.486825       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-wf5dr\" objectUID=819ad1e9-cbdf-4eee-b5f1-1bcbec88bb9a kind=\"Pod\" virtual=false\nI0919 13:42:04.486923       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-9bv4v\" objectUID=4adcda58-9169-45b7-8537-ffc3b746daa1 kind=\"Pod\" virtual=false\nI0919 13:42:04.486993       1 garbagecollector.go:475] \"Processing 
object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-8hwr9\" objectUID=66a33492-d954-439c-8c8e-f13c7642531e kind=\"Pod\" virtual=false\nI0919 13:42:04.487054       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-ppkqd\" objectUID=827a5604-8013-4a7e-a592-2e0688709247 kind=\"Pod\" virtual=false\nI0919 13:42:04.497566       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-7lcvs\" objectUID=a278cb5e-3021-4e2f-afb3-258869d94851 kind=\"Pod\" virtual=false\nI0919 13:42:04.498246       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-9nkd9\" objectUID=fd9ca2b1-0c62-41f2-9860-6d6afe4db7a8 kind=\"Pod\" virtual=false\nI0919 13:42:04.498421       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-d6wpd\" objectUID=2102cc8e-d0b6-42f7-b781-a85732a6baaa kind=\"Pod\" virtual=false\nI0919 13:42:04.500230       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-b2w8w\" objectUID=4b29157a-bbb9-4f31-93b6-ea5c8dd58824 kind=\"Pod\" virtual=false\nI0919 13:42:04.500260       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-xlxts\" objectUID=d4d4573b-3477-40cd-a72c-d08b8db96e2f kind=\"Pod\" virtual=false\nI0919 13:42:04.505061       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-cmnb4\" objectUID=fbe2b005-0b39-44fa-b849-8eb1204737a5 kind=\"Pod\" virtual=false\nI0919 13:42:04.513012       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-sgxpl\" objectUID=cc24c7e5-d1c8-4f2c-8fc5-de0dbb7be48f kind=\"Pod\" virtual=false\nI0919 
13:42:04.540911       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-j94nk\" objectUID=da51f325-cce3-4d50-8b32-a37cb2c2c4f5 kind=\"Pod\" virtual=false\nI0919 13:42:04.558216       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-5ms9b\" objectUID=72dfc731-be88-4624-8fe8-af0b3b2471c8 kind=\"Pod\" virtual=false\nI0919 13:42:04.583786       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-2g7qj\" objectUID=103edcec-859c-46d5-bc86-f29ae4b7d767 kind=\"Pod\" virtual=false\nI0919 13:42:04.608170       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-2cdw2\" objectUID=ea1123ed-e3d4-41de-bc26-be9d038a5608 kind=\"Pod\" virtual=false\nI0919 13:42:04.632571       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-qx5sr\" objectUID=2771567e-c608-405a-a5d0-9d755cfd927b kind=\"Pod\" virtual=false\nI0919 13:42:04.658098       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-ll4ch\" objectUID=0657c507-7462-43cc-9ef6-cc8ff6aa6d4b kind=\"Pod\" virtual=false\nI0919 13:42:04.684155       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-hxx57\" objectUID=e26363de-abd2-4442-a94b-8ef0e9f6c8fe kind=\"Pod\" virtual=false\nI0919 13:42:04.707740       1 garbagecollector.go:475] \"Processing object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-69b4z\" objectUID=168ed28a-2bc5-4122-b55b-93a04ae2fcfb kind=\"Pod\" virtual=false\nI0919 13:42:04.732315       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-zvs22\" 
objectUID=0fa46dc1-6ecc-4e18-bbb0-3bd4f7b1d6a9 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.755719       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-wf5dr\" objectUID=819ad1e9-cbdf-4eee-b5f1-1bcbec88bb9a kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.780484       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-9bv4v\" objectUID=4adcda58-9169-45b7-8537-ffc3b746daa1 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.805662       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-8hwr9\" objectUID=66a33492-d954-439c-8c8e-f13c7642531e kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.830968       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-ppkqd\" objectUID=827a5604-8013-4a7e-a592-2e0688709247 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.855436       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-7lcvs\" objectUID=a278cb5e-3021-4e2f-afb3-258869d94851 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.880879       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-9nkd9\" objectUID=fd9ca2b1-0c62-41f2-9860-6d6afe4db7a8 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.916418       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-d6wpd\" objectUID=2102cc8e-d0b6-42f7-b781-a85732a6baaa kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:04.931154       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-b2w8w\" objectUID=4b29157a-bbb9-4f31-93b6-ea5c8dd58824 kind=\"Pod\" 
propagationPolicy=Background\nI0919 13:42:04.955773       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-xlxts\" objectUID=d4d4573b-3477-40cd-a72c-d08b8db96e2f kind=\"Pod\" propagationPolicy=Background\nE0919 13:42:04.975125       1 tokens_controller.go:262] error synchronizing serviceaccount volume-964/default: secrets \"default-token-29l5m\" is forbidden: unable to create new content in namespace volume-964 because it is being terminated\nI0919 13:42:04.980108       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-cmnb4\" objectUID=fbe2b005-0b39-44fa-b849-8eb1204737a5 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.017660       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-sgxpl\" objectUID=cc24c7e5-d1c8-4f2c-8fc5-de0dbb7be48f kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.042279       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-j94nk\" objectUID=da51f325-cce3-4d50-8b32-a37cb2c2c4f5 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.058239       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-5ms9b\" objectUID=72dfc731-be88-4624-8fe8-af0b3b2471c8 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.080709       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-2g7qj\" objectUID=103edcec-859c-46d5-bc86-f29ae4b7d767 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.110727       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-2cdw2\" objectUID=ea1123ed-e3d4-41de-bc26-be9d038a5608 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.137938       1 
garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-qx5sr\" objectUID=2771567e-c608-405a-a5d0-9d755cfd927b kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.154995       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-ll4ch\" objectUID=0657c507-7462-43cc-9ef6-cc8ff6aa6d4b kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.180625       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-hxx57\" objectUID=e26363de-abd2-4442-a94b-8ef0e9f6c8fe kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.208765       1 garbagecollector.go:584] \"Deleting object\" object=\"kubelet-2164/cleanup40-56605450-4c51-4138-88be-2583759cd383-69b4z\" objectUID=168ed28a-2bc5-4122-b55b-93a04ae2fcfb kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.531266       1 event.go:294] \"Event occurred\" object=\"volume-expand-8303/awsqhgrf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0919 13:42:05.537992       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-expand-8303/awsqhgrf\"\nI0919 13:42:05.677575       1 namespace_controller.go:185] Namespace has been deleted provisioning-293\nI0919 13:42:05.681796       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 2\"\nI0919 13:42:05.689071       1 garbagecollector.go:475] \"Processing object\" object=\"provisioning-293-6045/csi-hostpathplugin-5b5db7b9dd\" objectUID=6c1bff6f-cfaf-4bcf-b548-216b05a7a1b6 kind=\"ControllerRevision\" virtual=false\nI0919 13:42:05.689249       1 garbagecollector.go:475] \"Processing object\" 
object=\"provisioning-293-6045/csi-hostpathplugin-0\" objectUID=65688f42-81bc-4011-a569-4fb93bd06e6e kind=\"Pod\" virtual=false\nI0919 13:42:05.689263       1 stateful_set.go:440] StatefulSet has been deleted provisioning-293-6045/csi-hostpathplugin\nI0919 13:42:05.692573       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-56fb65c6f6\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:05.704242       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6756b7b6d4 to 2\"\nI0919 13:42:05.704374       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-9644/webserver-6756b7b6d4\" need=2 deleting=1\nI0919 13:42:05.704414       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-9644/webserver-6756b7b6d4\" relatedReplicaSets=[webserver-6756b7b6d4 webserver-5c557bc5bf webserver-56fb65c6f6]\nI0919 13:42:05.704513       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-6756b7b6d4\" pod=\"deployment-9644/webserver-6756b7b6d4-df6hk\"\nI0919 13:42:05.716311       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:05.727702       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" need=5 creating=1\nI0919 13:42:05.731651       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set 
webserver-56fb65c6f6 to 5\"\nI0919 13:42:05.731905       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-9644/webserver-6756b7b6d4-df6hk\" objectUID=3ec5d7a0-4f38-4b98-b116-27a8ef05131b kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:05.732589       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-6756b7b6d4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6756b7b6d4-df6hk\"\nI0919 13:42:05.740130       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-56fb65c6f6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-56fb65c6f6-vq4tj\"\nI0919 13:42:05.748694       1 garbagecollector.go:584] \"Deleting object\" object=\"provisioning-293-6045/csi-hostpathplugin-5b5db7b9dd\" objectUID=6c1bff6f-cfaf-4bcf-b548-216b05a7a1b6 kind=\"ControllerRevision\" propagationPolicy=Background\nI0919 13:42:05.762058       1 garbagecollector.go:584] \"Deleting object\" object=\"provisioning-293-6045/csi-hostpathplugin-0\" objectUID=65688f42-81bc-4011-a569-4fb93bd06e6e kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:05.762324       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:05.780842       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-9644/webserver-6756b7b6d4-df6hk\" objectUID=3ec5d7a0-4f38-4b98-b116-27a8ef05131b kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:05.962481       1 namespace_controller.go:185] Namespace has been deleted ephemeral-1755\nI0919 13:42:06.113924       1 namespace_controller.go:185] Namespace has been deleted statefulset-6517\nI0919 13:42:06.187006       1 event.go:294] \"Event 
occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 3\"\nI0919 13:42:06.195420       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on replicasets.apps \\\"webserver-6756b7b6d4\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:06.202090       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" need=4 deleting=1\nI0919 13:42:06.202319       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" relatedReplicaSets=[webserver-6756b7b6d4 webserver-5c557bc5bf webserver-56fb65c6f6]\nI0919 13:42:06.202504       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-56fb65c6f6\" pod=\"deployment-9644/webserver-56fb65c6f6-vq4tj\"\nI0919 13:42:06.202508       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-56fb65c6f6 to 4\"\nI0919 13:42:06.210141       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:06.211199       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-56fb65c6f6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-56fb65c6f6-vq4tj\"\nI0919 13:42:06.214769       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9644/webserver-6756b7b6d4\" need=3 creating=1\nI0919 
13:42:06.217314       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6756b7b6d4 to 3\"\nI0919 13:42:06.223293       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-6756b7b6d4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6756b7b6d4-9qm6h\"\nI0919 13:42:06.311811       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:06.449703       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-9c68d7c8b to 1\"\nI0919 13:42:06.449887       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b\" need=1 deleting=1\nI0919 13:42:06.449916       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b\" relatedReplicaSets=[test-rolling-update-with-lb-945c6c889 test-rolling-update-with-lb-8c8cdc96d test-rolling-update-with-lb-9c68d7c8b]\nI0919 13:42:06.449982       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-9c68d7c8b\" pod=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b-wwl7f\"\nI0919 13:42:06.459408       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" need=3 creating=1\nI0919 13:42:06.460560       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" 
kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-945c6c889 to 3\"\nI0919 13:42:06.467144       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-945c6c889-7qw7q\"\nI0919 13:42:06.474006       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b-wwl7f\" objectUID=8aca327d-98b8-451b-90aa-a2936b3ad1c4 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:06.480441       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-9c68d7c8b-wwl7f\"\nE0919 13:42:06.484159       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-2506/pvc-t5slz: storageclass.storage.k8s.io \"provisioning-2506\" not found\nI0919 13:42:06.484366       1 event.go:294] \"Event occurred\" object=\"provisioning-2506/pvc-t5slz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-2506\\\" not found\"\nI0919 13:42:06.487535       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b-wwl7f\" objectUID=8aca327d-98b8-451b-90aa-a2936b3ad1c4 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:06.596586       1 pv_controller.go:879] volume \"local-r6ncv\" entered phase \"Available\"\nI0919 13:42:06.774053       1 pv_controller.go:1340] isVolumeReleased[pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9]: volume is released\nI0919 13:42:06.907898       1 pv_controller_base.go:521] deletion of claim 
\"volume-expand-1257/aws8ttsj\" was already processed\nI0919 13:42:07.091543       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-9888-5152/csi-mockplugin\nI0919 13:42:07.091618       1 garbagecollector.go:475] \"Processing object\" object=\"csi-mock-volumes-9888-5152/csi-mockplugin-5f8d7c9bbb\" objectUID=85dab408-42d9-4882-9b61-6c8e31dea1e8 kind=\"ControllerRevision\" virtual=false\nI0919 13:42:07.091719       1 garbagecollector.go:475] \"Processing object\" object=\"csi-mock-volumes-9888-5152/csi-mockplugin-0\" objectUID=843c3915-fdcf-4ccf-a198-ba68db44fdff kind=\"Pod\" virtual=false\nI0919 13:42:07.093701       1 garbagecollector.go:584] \"Deleting object\" object=\"csi-mock-volumes-9888-5152/csi-mockplugin-5f8d7c9bbb\" objectUID=85dab408-42d9-4882-9b61-6c8e31dea1e8 kind=\"ControllerRevision\" propagationPolicy=Background\nI0919 13:42:07.094120       1 garbagecollector.go:584] \"Deleting object\" object=\"csi-mock-volumes-9888-5152/csi-mockplugin-0\" objectUID=843c3915-fdcf-4ccf-a198-ba68db44fdff kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:07.346318       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-ad805cdd-72b0-440a-bd75-af914e43a0f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05a0f3ecbf9ddc4a6\") on node \"ip-172-20-62-71.eu-central-1.compute.internal\" \nI0919 13:42:07.349920       1 garbagecollector.go:475] \"Processing object\" object=\"csi-mock-volumes-9888-5152/csi-mockplugin-attacher-6ff4fc5554\" objectUID=6d3c833d-a1ed-4f58-8565-ec9354537cc7 kind=\"ControllerRevision\" virtual=false\nI0919 13:42:07.350130       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-9888-5152/csi-mockplugin-attacher\nI0919 13:42:07.350239       1 garbagecollector.go:475] \"Processing object\" object=\"csi-mock-volumes-9888-5152/csi-mockplugin-attacher-0\" objectUID=55bdb573-4ac9-406d-b727-6ef2af47d714 kind=\"Pod\" virtual=false\nI0919 13:42:07.356692       1 
garbagecollector.go:584] \"Deleting object\" object=\"csi-mock-volumes-9888-5152/csi-mockplugin-attacher-0\" objectUID=55bdb573-4ac9-406d-b727-6ef2af47d714 kind=\"Pod\" propagationPolicy=Background\nI0919 13:42:07.356794       1 garbagecollector.go:584] \"Deleting object\" object=\"csi-mock-volumes-9888-5152/csi-mockplugin-attacher-6ff4fc5554\" objectUID=6d3c833d-a1ed-4f58-8565-ec9354537cc7 kind=\"ControllerRevision\" propagationPolicy=Background\nI0919 13:42:07.544600       1 pv_controller.go:930] claim \"provisioning-2388/pvc-qx875\" bound to volume \"local-gc28x\"\nI0919 13:42:07.552542       1 pv_controller.go:879] volume \"local-gc28x\" entered phase \"Bound\"\nI0919 13:42:07.552624       1 pv_controller.go:982] volume \"local-gc28x\" bound to claim \"provisioning-2388/pvc-qx875\"\nI0919 13:42:07.559862       1 pv_controller.go:823] claim \"provisioning-2388/pvc-qx875\" entered phase \"Bound\"\nI0919 13:42:07.560171       1 pv_controller.go:930] claim \"provisioning-2506/pvc-t5slz\" bound to volume \"local-r6ncv\"\nI0919 13:42:07.568937       1 pv_controller.go:879] volume \"local-r6ncv\" entered phase \"Bound\"\nI0919 13:42:07.568968       1 pv_controller.go:982] volume \"local-r6ncv\" bound to claim \"provisioning-2506/pvc-t5slz\"\nI0919 13:42:07.576047       1 pv_controller.go:823] claim \"provisioning-2506/pvc-t5slz\" entered phase \"Bound\"\nI0919 13:42:07.576450       1 pv_controller.go:930] claim \"volume-421/pvc-82rmx\" bound to volume \"local-l8svp\"\nI0919 13:42:07.583376       1 pv_controller.go:879] volume \"local-l8svp\" entered phase \"Bound\"\nI0919 13:42:07.583406       1 pv_controller.go:982] volume \"local-l8svp\" bound to claim \"volume-421/pvc-82rmx\"\nI0919 13:42:07.590034       1 pv_controller.go:823] claim \"volume-421/pvc-82rmx\" entered phase \"Bound\"\nI0919 13:42:07.903591       1 namespace_controller.go:185] Namespace has been deleted endpointslice-9549\nI0919 13:42:07.939195       1 namespace_controller.go:185] Namespace has been 
deleted csi-mock-volumes-9888\nI0919 13:42:08.479708       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" need=5 creating=1\nI0919 13:42:08.480453       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-56fb65c6f6 to 5\"\nI0919 13:42:08.483998       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-56fb65c6f6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-56fb65c6f6-79lbc\"\nI0919 13:42:08.532314       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" need=4 deleting=1\nI0919 13:42:08.533744       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]\nI0919 13:42:08.533998       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-56fb65c6f6\" pod=\"deployment-9644/webserver-56fb65c6f6-79lbc\"\nI0919 13:42:08.534829       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-56fb65c6f6 to 4\"\nI0919 13:42:08.548004       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:08.551018       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-56fb65c6f6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: 
webserver-56fb65c6f6-79lbc\"\nI0919 13:42:08.555141       1 namespace_controller.go:185] Namespace has been deleted proxy-8480\nI0919 13:42:08.557366       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9644/webserver-6756b7b6d4\" need=4 creating=1\nI0919 13:42:08.565049       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6756b7b6d4 to 4\"\nI0919 13:42:08.575867       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-6756b7b6d4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6756b7b6d4-fqvwl\"\nI0919 13:42:08.856667       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2069\nI0919 13:42:08.929017       1 namespace_controller.go:185] Namespace has been deleted services-4026\nI0919 13:42:08.987559       1 namespace_controller.go:185] Namespace has been deleted volume-2565\nI0919 13:42:09.118789       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8169/pvc-p4szb\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-8169\\\" or manually created by system administrator\"\nI0919 13:42:09.139394       1 pv_controller.go:879] volume \"pvc-91321fd3-ebb8-48b0-a5d5-f8cb53ce2471\" entered phase \"Bound\"\nI0919 13:42:09.139428       1 pv_controller.go:982] volume \"pvc-91321fd3-ebb8-48b0-a5d5-f8cb53ce2471\" bound to claim \"csi-mock-volumes-8169/pvc-p4szb\"\nI0919 13:42:09.154394       1 pv_controller.go:823] claim \"csi-mock-volumes-8169/pvc-p4szb\" entered phase \"Bound\"\nI0919 13:42:09.601878       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume 
\"pvc-91321fd3-ebb8-48b0-a5d5-f8cb53ce2471\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8169^4\") from node \"ip-172-20-48-58.eu-central-1.compute.internal\" \nI0919 13:42:10.108149       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-91321fd3-ebb8-48b0-a5d5-f8cb53ce2471\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8169^4\") from node \"ip-172-20-48-58.eu-central-1.compute.internal\" \nI0919 13:42:10.108732       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8169/pvc-volume-tester-2nz9z\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-91321fd3-ebb8-48b0-a5d5-f8cb53ce2471\\\" \"\nI0919 13:42:10.210058       1 namespace_controller.go:185] Namespace has been deleted volume-964\nW0919 13:42:10.330936       1 reconciler.go:222] attacherDetacher.DetachVolume started for volume \"pvc-a57fe52f-75fc-4d82-854a-ebb409af0676\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8117^863fe045-194e-11ec-8024-16a6c4327bf2\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" This volume is not safe to detach, but maxWaitForUnmountDuration 6m0s expired, force detaching\nI0919 13:42:10.450561       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" need=3 deleting=1\nI0919 13:42:10.450743       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" relatedReplicaSets=[webserver-56fb65c6f6 webserver-6756b7b6d4 webserver-5c557bc5bf]\nI0919 13:42:10.450916       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-56fb65c6f6\" pod=\"deployment-9644/webserver-56fb65c6f6-295fz\"\nI0919 13:42:10.451813       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set 
webserver-56fb65c6f6 to 3\"\nI0919 13:42:10.461399       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9644/webserver-6756b7b6d4\" need=5 creating=1\nI0919 13:42:10.462808       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6756b7b6d4 to 5\"\nI0919 13:42:10.467449       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-56fb65c6f6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-56fb65c6f6-295fz\"\nI0919 13:42:10.468361       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-9644/webserver-56fb65c6f6-295fz\" objectUID=e59270f9-f12f-4a42-acd6-26b936ce200e kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:10.473130       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:10.474609       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-6756b7b6d4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6756b7b6d4-bh86n\"\nI0919 13:42:10.520973       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-9644/webserver-56fb65c6f6-295fz\" objectUID=e59270f9-f12f-4a42-acd6-26b936ce200e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:10.884824       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-a57fe52f-75fc-4d82-854a-ebb409af0676\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-8117^863fe045-194e-11ec-8024-16a6c4327bf2\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:42:11.011048  
     1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9644/webserver-6756b7b6d4\" need=6 creating=1\nI0919 13:42:11.014835       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6756b7b6d4 to 6\"\nI0919 13:42:11.023888       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-56fb65c6f6 to 4\"\nI0919 13:42:11.024301       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" need=4 creating=1\nE0919 13:42:11.028969       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:42:11.029463       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-6756b7b6d4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6756b7b6d4-v6qc8\"\nI0919 13:42:11.058687       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-56fb65c6f6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-56fb65c6f6-phnb2\"\nI0919 13:42:11.080080       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0919 13:42:11.189393       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-8303/default: secrets \"default-token-p55bl\" is forbidden: unable to create new content in 
namespace volume-expand-8303 because it is being terminated\nI0919 13:42:11.279595       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-595/test-recreate-deployment-7f9bc4579c\" need=1 creating=1\nI0919 13:42:11.280867       1 event.go:294] \"Event occurred\" object=\"deployment-595/test-recreate-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-recreate-deployment-7f9bc4579c to 1\"\nI0919 13:42:11.293712       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-595/test-recreate-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-recreate-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:11.294444       1 event.go:294] \"Event occurred\" object=\"deployment-595/test-recreate-deployment-7f9bc4579c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-recreate-deployment-7f9bc4579c-tmf7n\"\nI0919 13:42:11.308937       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-595/test-recreate-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-recreate-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:11.367686       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-1557/pod-351d5fd3-d60b-4c2b-a8ea-33beab440a1e\" PVC=\"persistent-local-volumes-test-1557/pvc-frrlx\"\nI0919 13:42:11.367723       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1557/pvc-frrlx\"\nI0919 13:42:12.606137       1 namespace_controller.go:185] Namespace has been deleted emptydir-1723\nI0919 13:42:14.244830       1 job_controller.go:406] enqueueing job 
cronjob-2237/concurrent-27200982\nI0919 13:42:14.928845       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-4236/pvc-cn4qk\"\nI0919 13:42:14.941190       1 pv_controller.go:640] volume \"local-2fx9l\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:42:14.945241       1 pv_controller.go:879] volume \"local-2fx9l\" entered phase \"Released\"\nI0919 13:42:15.044059       1 pv_controller_base.go:521] deletion of claim \"volume-4236/pvc-cn4qk\" was already processed\nI0919 13:42:15.884546       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-9644/webserver-6756b7b6d4\" need=5 deleting=1\nI0919 13:42:15.884583       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-9644/webserver-6756b7b6d4\" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]\nI0919 13:42:15.884948       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-6756b7b6d4\" pod=\"deployment-9644/webserver-6756b7b6d4-bh86n\"\nI0919 13:42:15.888598       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6756b7b6d4 to 5\"\nI0919 13:42:15.891445       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-9644/webserver-6756b7b6d4-bh86n\" objectUID=23ac7857-c568-4a72-915e-1b8eb3684d37 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:15.894056       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" need=3 deleting=1\nI0919 13:42:15.894091       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-9644/webserver-56fb65c6f6\" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]\nI0919 13:42:15.894273       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-56fb65c6f6\" 
pod=\"deployment-9644/webserver-56fb65c6f6-phnb2\"\nI0919 13:42:15.894958       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-6756b7b6d4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6756b7b6d4-bh86n\"\nI0919 13:42:15.898325       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-9644/webserver-6756b7b6d4-bh86n\" objectUID=23ac7857-c568-4a72-915e-1b8eb3684d37 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:15.900231       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-56fb65c6f6 to 3\"\nI0919 13:42:15.918786       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:15.925024       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-9644/webserver-56fb65c6f6-phnb2\" objectUID=e1e06d70-f2fb-4927-b0f9-13e1bf8464aa kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:15.925913       1 event.go:294] \"Event occurred\" object=\"deployment-9644/webserver-56fb65c6f6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-56fb65c6f6-phnb2\"\nI0919 13:42:15.930483       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9644/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:15.933876       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-9644/webserver-56fb65c6f6-phnb2\" 
objectUID=e1e06d70-f2fb-4927-b0f9-13e1bf8464aa kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:16.391319       1 namespace_controller.go:185] Namespace has been deleted volume-expand-8303\nE0919 13:42:16.621208       1 namespace_controller.go:162] deletion of namespace webhook-6232 failed: unexpected items still remain in namespace: webhook-6232 for gvr: /v1, Resource=pods\nI0919 13:42:17.457097       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-2992-6340/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0919 13:42:17.730124       1 namespace_controller.go:185] Namespace has been deleted volume-9609\nI0919 13:42:17.934710       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-9c68d7c8b to 0\"\nI0919 13:42:17.935300       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b\" need=0 deleting=1\nI0919 13:42:17.936793       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b\" relatedReplicaSets=[test-rolling-update-with-lb-8c8cdc96d test-rolling-update-with-lb-9c68d7c8b test-rolling-update-with-lb-945c6c889]\nI0919 13:42:17.936977       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-9c68d7c8b\" pod=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b-sjkqv\"\nI0919 13:42:17.951132       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-9c68d7c8b-sjkqv\"\nI0919 
13:42:17.952785       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b-sjkqv\" objectUID=260ce8db-e0e3-42fd-9f1a-6dea6752831e kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:17.962919       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-6925/test-rolling-update-with-lb-9c68d7c8b-sjkqv\" objectUID=260ce8db-e0e3-42fd-9f1a-6dea6752831e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:19.759327       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-6bb6c4d99b\" need=1 creating=1\nI0919 13:42:19.760652       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-6bb6c4d99b to 1\"\nI0919 13:42:19.768474       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-6bb6c4d99b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-6bb6c4d99b-9kkqp\"\nI0919 13:42:19.778754       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6925/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:19.986104       1 namespace_controller.go:185] Namespace has been deleted volume-expand-1257\nI0919 13:42:20.658740       1 namespace_controller.go:185] Namespace has been deleted kubectl-1035\nI0919 13:42:21.164321       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" need=2 deleting=1\nI0919 13:42:21.164445       1 replica_set.go:227] \"Found related ReplicaSets\" 
replicaSet=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" relatedReplicaSets=[test-rolling-update-with-lb-8c8cdc96d test-rolling-update-with-lb-9c68d7c8b test-rolling-update-with-lb-945c6c889 test-rolling-update-with-lb-6bb6c4d99b]\nI0919 13:42:21.164656       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-945c6c889\" pod=\"deployment-6925/test-rolling-update-with-lb-945c6c889-sk8qm\"\nI0919 13:42:21.165438       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-945c6c889 to 2\"\nI0919 13:42:21.174718       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-6925/test-rolling-update-with-lb-945c6c889-sk8qm\" objectUID=03685526-2fda-4383-9884-384818419d78 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:21.175465       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-945c6c889-sk8qm\"\nI0919 13:42:21.179111       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-6bb6c4d99b\" need=2 creating=1\nI0919 13:42:21.182732       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-6bb6c4d99b to 2\"\nI0919 13:42:21.191197       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-6925/test-rolling-update-with-lb-945c6c889-sk8qm\" objectUID=03685526-2fda-4383-9884-384818419d78 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:21.193304       1 deployment_controller.go:490] 
\"Error syncing deployment\" deployment=\"deployment-6925/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:21.194853       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-6bb6c4d99b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-6bb6c4d99b-5q24n\"\nI0919 13:42:21.243953       1 job_controller.go:406] enqueueing job cronjob-2237/concurrent-27200982\nI0919 13:42:21.244641       1 event.go:294] \"Event occurred\" object=\"cronjob-2237/concurrent-27200982\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0919 13:42:21.257688       1 job_controller.go:406] enqueueing job cronjob-2237/concurrent-27200982\nI0919 13:42:21.258158       1 event.go:294] \"Event occurred\" object=\"cronjob-2237/concurrent\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SawCompletedJob\" message=\"Saw completed job: concurrent-27200982, status: Complete\"\nE0919 13:42:21.295278       1 tokens_controller.go:262] error synchronizing serviceaccount volume-4236/default: secrets \"default-token-sr57f\" is forbidden: unable to create new content in namespace volume-4236 because it is being terminated\nE0919 13:42:21.995872       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:42:22.479340       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-945c6c889 to 
1"
I0919 13:42:22.479540       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-6925/test-rolling-update-with-lb-945c6c889" need=1 deleting=1
I0919 13:42:22.479572       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-6925/test-rolling-update-with-lb-945c6c889" relatedReplicaSets=[test-rolling-update-with-lb-8c8cdc96d test-rolling-update-with-lb-9c68d7c8b test-rolling-update-with-lb-945c6c889 test-rolling-update-with-lb-6bb6c4d99b]
I0919 13:42:22.479657       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-945c6c889" pod="deployment-6925/test-rolling-update-with-lb-945c6c889-7qw7q"
I0919 13:42:22.495939       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-6bb6c4d99b to 3"
I0919 13:42:22.497521       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-6925/test-rolling-update-with-lb-6bb6c4d99b" need=3 creating=1
I0919 13:42:22.497868       1 garbagecollector.go:475] "Processing object" object="deployment-6925/test-rolling-update-with-lb-945c6c889-7qw7q" objectUID=50979f94-65cd-4405-aa22-422ae1c793dc kind="CiliumEndpoint" virtual=false
I0919 13:42:22.504300       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb-945c6c889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-945c6c889-7qw7q"
I0919 13:42:22.507579       1 garbagecollector.go:584] "Deleting object" object="deployment-6925/test-rolling-update-with-lb-945c6c889-7qw7q" objectUID=50979f94-65cd-4405-aa22-422ae1c793dc kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:22.518828       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb-6bb6c4d99b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-6bb6c4d99b-lzbgd"
I0919 13:42:22.522831       1 endpoints_controller.go:374] "Error syncing endpoints, retrying" service="deployment-6925/test-rolling-update-with-lb" err="Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:22.523186       1 event.go:294] "Event occurred" object="deployment-6925/test-rolling-update-with-lb" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint deployment-6925/test-rolling-update-with-lb: Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
E0919 13:42:23.252308       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:24.526703       1 namespace_controller.go:185] Namespace has been deleted proxy-6772
I0919 13:42:25.625682       1 pv_controller.go:879] volume "local-pv77c59" entered phase "Available"
I0919 13:42:25.734311       1 pv_controller.go:930] claim "persistent-local-volumes-test-1072/pvc-lbj8d" bound to volume "local-pv77c59"
I0919 13:42:25.742730       1 pv_controller.go:879] volume "local-pv77c59" entered phase "Bound"
I0919 13:42:25.742760       1 pv_controller.go:982] volume "local-pv77c59" bound to claim "persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:25.749211       1 pv_controller.go:823] claim "persistent-local-volumes-test-1072/pvc-lbj8d" entered phase "Bound"
I0919 13:42:25.786669       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:25.790552       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="DeploymentRollback" message="Rolled back deployment \"webserver\" to revision 4"
I0919 13:42:25.795701       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on replicasets.apps \"webserver-56fb65c6f6\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:25.802680       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6756b7b6d4 to 3"
I0919 13:42:25.802961       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-6756b7b6d4" need=3 deleting=2
I0919 13:42:25.802990       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-6756b7b6d4" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]
I0919 13:42:25.803148       1 controller_utils.go:592] "Deleting pod" controller="webserver-6756b7b6d4" pod="deployment-9644/webserver-6756b7b6d4-jjtdl"
I0919 13:42:25.803234       1 controller_utils.go:592] "Deleting pod" controller="webserver-6756b7b6d4" pod="deployment-9644/webserver-6756b7b6d4-v6qc8"
I0919 13:42:25.807360       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-5c557bc5bf" need=0 deleting=2
I0919 13:42:25.807394       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-5c557bc5bf" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]
I0919 13:42:25.807982       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-5c557bc5bf to 0"
I0919 13:42:25.808169       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-9lf45"
I0919 13:42:25.808281       1 controller_utils.go:592] "Deleting pod" controller="webserver-5c557bc5bf" pod="deployment-9644/webserver-5c557bc5bf-cx8wp"
I0919 13:42:25.811521       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:25.817264       1 event.go:294] "Event occurred" object="deployment-9644/webserver-6756b7b6d4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6756b7b6d4-jjtdl"
I0919 13:42:25.817498       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-6756b7b6d4-jjtdl" objectUID=4ebd3479-3607-4e2e-881e-10ecb77be9db kind="CiliumEndpoint" virtual=false
I0919 13:42:25.829527       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-56fb65c6f6" need=7 creating=4
I0919 13:42:25.830456       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-56fb65c6f6 to 7"
I0919 13:42:25.852201       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-6756b7b6d4-v6qc8" objectUID=0f950bd9-8f28-4646-80e8-adf0ccda2f38 kind="CiliumEndpoint" virtual=false
I0919 13:42:25.854003       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-6756b7b6d4-jjtdl" objectUID=4ebd3479-3607-4e2e-881e-10ecb77be9db kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:25.854500       1 event.go:294] "Event occurred" object="deployment-9644/webserver-6756b7b6d4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6756b7b6d4-v6qc8"
I0919 13:42:25.857499       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-cx8wp"
I0919 13:42:25.861410       1 event.go:294] "Event occurred" object="deployment-9644/webserver-5c557bc5bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-5c557bc5bf-9lf45"
I0919 13:42:25.863884       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-9lf45" objectUID=0b31fb45-a79c-49be-95de-f3a5c5910ab0 kind="CiliumEndpoint" virtual=false
I0919 13:42:25.866941       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf-cx8wp" objectUID=f1ab0f4f-7475-4ac8-a8af-54027fdd3157 kind="CiliumEndpoint" virtual=false
I0919 13:42:25.871601       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-6756b7b6d4-v6qc8" objectUID=0f950bd9-8f28-4646-80e8-adf0ccda2f38 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:25.873599       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-sqz2w"
I0919 13:42:25.876041       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-9lf45" objectUID=0b31fb45-a79c-49be-95de-f3a5c5910ab0 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:25.883834       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:25.883927       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf-cx8wp" objectUID=f1ab0f4f-7475-4ac8-a8af-54027fdd3157 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:25.893781       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-8stmq"
I0919 13:42:25.894191       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-gx627"
I0919 13:42:25.927991       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-6ntfq"
E0919 13:42:26.003301       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-4011/default: secrets "default-token-zprrm" is forbidden: unable to create new content in namespace emptydir-4011 because it is being terminated
E0919 13:42:26.250092       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:26.349392       1 namespace_controller.go:185] Namespace has been deleted volume-4236
I0919 13:42:26.508943       1 pv_controller.go:879] volume "local-pvvvvtc" entered phase "Available"
I0919 13:42:26.612478       1 pv_controller.go:930] claim "persistent-local-volumes-test-2725/pvc-vvdxz" bound to volume "local-pvvvvtc"
I0919 13:42:26.629827       1 pv_controller.go:879] volume "local-pvvvvtc" entered phase "Bound"
I0919 13:42:26.630333       1 pv_controller.go:982] volume "local-pvvvvtc" bound to claim "persistent-local-volumes-test-2725/pvc-vvdxz"
I0919 13:42:26.637499       1 pv_controller.go:823] claim "persistent-local-volumes-test-2725/pvc-vvdxz" entered phase "Bound"
I0919 13:42:27.184779       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-6756b7b6d4" need=2 deleting=1
I0919 13:42:27.184813       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-6756b7b6d4" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]
I0919 13:42:27.184943       1 controller_utils.go:592] "Deleting pod" controller="webserver-6756b7b6d4" pod="deployment-9644/webserver-6756b7b6d4-cqz2q"
I0919 13:42:27.185110       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6756b7b6d4 to 2"
I0919 13:42:27.195176       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-6756b7b6d4-cqz2q" objectUID=5ce07e9f-5903-45c8-b21a-6aea04c1f955 kind="CiliumEndpoint" virtual=false
I0919 13:42:27.196409       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-56fb65c6f6" need=8 creating=1
I0919 13:42:27.196921       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-56fb65c6f6 to 8"
I0919 13:42:27.197160       1 event.go:294] "Event occurred" object="deployment-9644/webserver-6756b7b6d4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6756b7b6d4-cqz2q"
I0919 13:42:27.211236       1 event.go:294] "Event occurred" object="deployment-9644/webserver-56fb65c6f6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-56fb65c6f6-5rw2m"
I0919 13:42:27.212892       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-6756b7b6d4-cqz2q" objectUID=5ce07e9f-5903-45c8-b21a-6aea04c1f955 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:27.364319       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-6756b7b6d4" need=1 deleting=1
I0919 13:42:27.364522       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-6756b7b6d4" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]
I0919 13:42:27.364904       1 controller_utils.go:592] "Deleting pod" controller="webserver-6756b7b6d4" pod="deployment-9644/webserver-6756b7b6d4-fqvwl"
I0919 13:42:27.365773       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6756b7b6d4 to 1"
I0919 13:42:27.395798       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-6756b7b6d4-fqvwl" objectUID=7692615d-ba49-4ca8-ac52-813a8351f119 kind="CiliumEndpoint" virtual=false
I0919 13:42:27.396630       1 event.go:294] "Event occurred" object="deployment-9644/webserver-6756b7b6d4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6756b7b6d4-fqvwl"
I0919 13:42:27.405115       1 namespace_controller.go:185] Namespace has been deleted pods-8417
I0919 13:42:27.407661       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-6756b7b6d4-fqvwl" objectUID=7692615d-ba49-4ca8-ac52-813a8351f119 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:27.851343       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-9644/webserver-6756b7b6d4" need=0 deleting=1
I0919 13:42:27.851897       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-9644/webserver-6756b7b6d4" relatedReplicaSets=[webserver-5c557bc5bf webserver-56fb65c6f6 webserver-6756b7b6d4]
I0919 13:42:27.852083       1 controller_utils.go:592] "Deleting pod" controller="webserver-6756b7b6d4" pod="deployment-9644/webserver-6756b7b6d4-9qm6h"
I0919 13:42:27.852712       1 event.go:294] "Event occurred" object="deployment-9644/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-6756b7b6d4 to 0"
I0919 13:42:27.866396       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-6756b7b6d4-9qm6h" objectUID=7824d39e-db45-4f92-b422-db8d3d55077f kind="CiliumEndpoint" virtual=false
I0919 13:42:27.867104       1 event.go:294] "Event occurred" object="deployment-9644/webserver-6756b7b6d4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-6756b7b6d4-9qm6h"
I0919 13:42:27.869480       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-9644/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:27.875508       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-6756b7b6d4-9qm6h" objectUID=7824d39e-db45-4f92-b422-db8d3d55077f kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:28.230365       1 event.go:294] "Event occurred" object="csi-mock-volumes-2992/pvc-45x7c" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-2992\" or manually created by system administrator"
I0919 13:42:28.230457       1 event.go:294] "Event occurred" object="csi-mock-volumes-2992/pvc-45x7c" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-2992\" or manually created by system administrator"
I0919 13:42:28.270186       1 pv_controller.go:879] volume "pvc-858517c7-c76a-4b36-9c25-b00d377ad9ef" entered phase "Bound"
I0919 13:42:28.270282       1 pv_controller.go:982] volume "pvc-858517c7-c76a-4b36-9c25-b00d377ad9ef" bound to claim "csi-mock-volumes-2992/pvc-45x7c"
I0919 13:42:28.278332       1 pv_controller.go:823] claim "csi-mock-volumes-2992/pvc-45x7c" entered phase "Bound"
I0919 13:42:29.993297       1 expand_controller.go:289] Ignoring the PVC "csi-mock-volumes-8169/pvc-p4szb" (uid: "91321fd3-ebb8-48b0-a5d5-f8cb53ce2471") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
I0919 13:42:29.993896       1 event.go:294] "Event occurred" object="csi-mock-volumes-8169/pvc-p4szb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ExternalExpanding" message="Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."
I0919 13:42:31.129636       1 namespace_controller.go:185] Namespace has been deleted emptydir-4011
E0919 13:42:33.663826       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:34.008261       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-421/pvc-82rmx"
I0919 13:42:34.014337       1 pv_controller.go:640] volume "local-l8svp" is released and reclaim policy "Retain" will be executed
I0919 13:42:34.018227       1 pv_controller.go:879] volume "local-l8svp" entered phase "Released"
I0919 13:42:34.052299       1 garbagecollector.go:475] "Processing object" object="cronjob-2237/concurrent-27200981" objectUID=c3c67c56-5c3a-45eb-972e-8cf0153c9d4c kind="Job" virtual=false
I0919 13:42:34.052373       1 garbagecollector.go:475] "Processing object" object="cronjob-2237/concurrent-27200982" objectUID=ae223c83-380b-40ac-92a0-76cd83421450 kind="Job" virtual=false
I0919 13:42:34.054514       1 garbagecollector.go:584] "Deleting object" object="cronjob-2237/concurrent-27200981" objectUID=c3c67c56-5c3a-45eb-972e-8cf0153c9d4c kind="Job" propagationPolicy=Background
I0919 13:42:34.054634       1 garbagecollector.go:584] "Deleting object" object="cronjob-2237/concurrent-27200982" objectUID=ae223c83-380b-40ac-92a0-76cd83421450 kind="Job" propagationPolicy=Background
I0919 13:42:34.058418       1 garbagecollector.go:475] "Processing object" object="cronjob-2237/concurrent-27200981--1-r447k" objectUID=9695a4cb-e16d-4bbd-9054-2722efffdd6b kind="Pod" virtual=false
I0919 13:42:34.058743       1 job_controller.go:406] enqueueing job cronjob-2237/concurrent-27200981
I0919 13:42:34.060415       1 garbagecollector.go:475] "Processing object" object="cronjob-2237/concurrent-27200982--1-66mnr" objectUID=12245c6a-987c-4c9c-86fa-63b1eb1c5695 kind="Pod" virtual=false
I0919 13:42:34.060711       1 job_controller.go:406] enqueueing job cronjob-2237/concurrent-27200982
I0919 13:42:34.061696       1 garbagecollector.go:584] "Deleting object" object="cronjob-2237/concurrent-27200981--1-r447k" objectUID=9695a4cb-e16d-4bbd-9054-2722efffdd6b kind="Pod" propagationPolicy=Background
I0919 13:42:34.070795       1 garbagecollector.go:584] "Deleting object" object="cronjob-2237/concurrent-27200982--1-66mnr" objectUID=12245c6a-987c-4c9c-86fa-63b1eb1c5695 kind="Pod" propagationPolicy=Background
I0919 13:42:34.122751       1 pv_controller_base.go:521] deletion of claim "volume-421/pvc-82rmx" was already processed
I0919 13:42:34.244071       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-6797/inline-volume-tester-sskrf" PVC="ephemeral-6797/inline-volume-tester-sskrf-my-volume-0"
I0919 13:42:34.244103       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-6797/inline-volume-tester-sskrf-my-volume-0"
I0919 13:42:34.322764       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-eaf459f8-d69a-4283-b52c-c06a7229131f" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-6797^4184b3fd-194f-11ec-a74d-3e36c5c5753f") on node "ip-172-20-50-204.eu-central-1.compute.internal" 
I0919 13:42:34.326416       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-eaf459f8-d69a-4283-b52c-c06a7229131f" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-6797^4184b3fd-194f-11ec-a74d-3e36c5c5753f") on node "ip-172-20-50-204.eu-central-1.compute.internal" 
I0919 13:42:34.450031       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-6797/inline-volume-tester-sskrf-my-volume-0"
I0919 13:42:34.455864       1 garbagecollector.go:475] "Processing object" object="ephemeral-6797/inline-volume-tester-sskrf" objectUID=46294d18-8882-426c-ae4e-2508c66c7e6b kind="Pod" virtual=false
I0919 13:42:34.458042       1 garbagecollector.go:594] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-6797, name: inline-volume-tester-sskrf, uid: 46294d18-8882-426c-ae4e-2508c66c7e6b]
I0919 13:42:34.458104       1 pv_controller.go:640] volume "pvc-eaf459f8-d69a-4283-b52c-c06a7229131f" is released and reclaim policy "Delete" will be executed
I0919 13:42:34.461793       1 pv_controller.go:879] volume "pvc-eaf459f8-d69a-4283-b52c-c06a7229131f" entered phase "Released"
I0919 13:42:34.464952       1 pv_controller.go:1340] isVolumeReleased[pvc-eaf459f8-d69a-4283-b52c-c06a7229131f]: volume is released
I0919 13:42:34.480065       1 pv_controller.go:1340] isVolumeReleased[pvc-eaf459f8-d69a-4283-b52c-c06a7229131f]: volume is released
I0919 13:42:34.492431       1 pv_controller_base.go:521] deletion of claim "ephemeral-6797/inline-volume-tester-sskrf-my-volume-0" was already processed
I0919 13:42:34.783931       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-2388/pvc-qx875"
I0919 13:42:34.791490       1 pv_controller.go:640] volume "local-gc28x" is released and reclaim policy "Retain" will be executed
I0919 13:42:34.796717       1 pv_controller.go:879] volume "local-gc28x" entered phase "Released"
I0919 13:42:34.863388       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-eaf459f8-d69a-4283-b52c-c06a7229131f" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-6797^4184b3fd-194f-11ec-a74d-3e36c5c5753f") on node "ip-172-20-50-204.eu-central-1.compute.internal" 
I0919 13:42:34.900360       1 pv_controller_base.go:521] deletion of claim "provisioning-2388/pvc-qx875" was already processed
E0919 13:42:34.917970       1 tokens_controller.go:262] error synchronizing serviceaccount firewall-test-5236/default: secrets "default-token-xjbkw" is forbidden: unable to create new content in namespace firewall-test-5236 because it is being terminated
W0919 13:42:34.938223       1 reconciler.go:222] attacherDetacher.DetachVolume started for volume "pvc-1fc70ad0-d925-49b9-a9c6-9d958c3e59f2" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-582^974aa59d-194e-11ec-882d-de44e361b19e") on node "ip-172-20-62-71.eu-central-1.compute.internal" This volume is not safe to detach, but maxWaitForUnmountDuration 6m0s expired, force detaching
I0919 13:42:35.338590       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-1072/pod-95be43ce-6693-47e7-9da1-6c52eab44b07" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:35.338734       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:35.452226       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-1fc70ad0-d925-49b9-a9c6-9d958c3e59f2" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-582^974aa59d-194e-11ec-882d-de44e361b19e") on node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:42:35.509312       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-2506/pvc-t5slz"
I0919 13:42:35.517107       1 pv_controller.go:640] volume "local-r6ncv" is released and reclaim policy "Retain" will be executed
I0919 13:42:35.521181       1 pv_controller.go:879] volume "local-r6ncv" entered phase "Released"
I0919 13:42:35.617261       1 pv_controller_base.go:521] deletion of claim "provisioning-2506/pvc-t5slz" was already processed
I0919 13:42:35.929590       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-595/test-recreate-deployment-7f9bc4579c" need=0 deleting=1
I0919 13:42:35.929742       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-595/test-recreate-deployment-7f9bc4579c" relatedReplicaSets=[test-recreate-deployment-7f9bc4579c]
I0919 13:42:35.929861       1 controller_utils.go:592] "Deleting pod" controller="test-recreate-deployment-7f9bc4579c" pod="deployment-595/test-recreate-deployment-7f9bc4579c-tmf7n"
I0919 13:42:35.930118       1 event.go:294] "Event occurred" object="deployment-595/test-recreate-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-recreate-deployment-7f9bc4579c to 0"
I0919 13:42:35.938686       1 event.go:294] "Event occurred" object="deployment-595/test-recreate-deployment-7f9bc4579c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-recreate-deployment-7f9bc4579c-tmf7n"
I0919 13:42:35.939020       1 garbagecollector.go:475] "Processing object" object="deployment-595/test-recreate-deployment-7f9bc4579c-tmf7n" objectUID=d0b9bd66-9783-4efa-9539-973cc16d9970 kind="CiliumEndpoint" virtual=false
I0919 13:42:35.944767       1 garbagecollector.go:584] "Deleting object" object="deployment-595/test-recreate-deployment-7f9bc4579c-tmf7n" objectUID=d0b9bd66-9783-4efa-9539-973cc16d9970 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:35.955198       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-595/test-recreate-deployment-785fd889" need=1 creating=1
I0919 13:42:35.955757       1 event.go:294] "Event occurred" object="deployment-595/test-recreate-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-recreate-deployment-785fd889 to 1"
I0919 13:42:35.960399       1 event.go:294] "Event occurred" object="deployment-595/test-recreate-deployment-785fd889" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-recreate-deployment-785fd889-wpx8h"
I0919 13:42:35.969915       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-595/test-recreate-deployment" err="Operation cannot be fulfilled on deployments.apps \"test-recreate-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:35.984526       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-595/test-recreate-deployment" err="Operation cannot be fulfilled on deployments.apps \"test-recreate-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:36.533491       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-1072/pod-95be43ce-6693-47e7-9da1-6c52eab44b07" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.533658       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.537665       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-1072/pod-95be43ce-6693-47e7-9da1-6c52eab44b07" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.537783       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.540542       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-1072/pod-a0f63860-47ba-4394-8a4a-aa681a06034e" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.541415       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.549951       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-1072/pod-a0f63860-47ba-4394-8a4a-aa681a06034e" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.549985       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.559218       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-1072/pod-a0f63860-47ba-4394-8a4a-aa681a06034e" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.559360       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.567790       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-1072/pvc-lbj8d"
I0919 13:42:36.574398       1 pv_controller.go:640] volume "local-pv77c59" is released and reclaim policy "Retain" will be executed
I0919 13:42:36.580112       1 pv_controller.go:879] volume "local-pv77c59" entered phase "Released"
I0919 13:42:36.583449       1 pv_controller_base.go:521] deletion of claim "persistent-local-volumes-test-1072/pvc-lbj8d" was already processed
I0919 13:42:36.893604       1 namespace_controller.go:185] Namespace has been deleted nettest-8041
I0919 13:42:37.793545       1 namespace_controller.go:185] Namespace has been deleted provisioning-293-6045
E0919 13:42:37.985345       1 tokens_controller.go:262] error synchronizing serviceaccount cronjob-9790/default: secrets "default-token-7xw5h" is forbidden: unable to create new content in namespace cronjob-9790 because it is being terminated
E0919 13:42:38.078509       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
E0919 13:42:38.221728       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
I0919 13:42:38.305298       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-6180/test-rs" need=2 creating=1
I0919 13:42:38.315972       1 event.go:294] "Event occurred" object="replicaset-6180/test-rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rs-tjrbr"
E0919 13:42:38.427212       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
I0919 13:42:38.525296       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-6180/test-rs" need=4 creating=2
I0919 13:42:38.531164       1 event.go:294] "Event occurred" object="replicaset-6180/test-rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rs-h27sc"
I0919 13:42:38.536911       1 event.go:294] "Event occurred" object="replicaset-6180/test-rs" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rs-l6smg"
E0919 13:42:38.631397       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
E0919 13:42:38.632627       1 tokens_controller.go:262] error synchronizing serviceaccount sysctl-3396/default: secrets "default-token-btcp5" is forbidden: unable to create new content in namespace sysctl-3396 because it is being terminated
E0919 13:42:38.949870       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
I0919 13:42:38.997044       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-9888-5152
E0919 13:42:39.206768       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:39.209202       1 namespace_controller.go:185] Namespace has been deleted configmap-7729
E0919 13:42:39.235651       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
E0919 13:42:39.248001       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
E0919 13:42:39.315204       1 tokens_controller.go:262] error synchronizing serviceaccount cronjob-2237/default: secrets "default-token-xz7rn" is forbidden: unable to create new content in namespace cronjob-2237 because it is being terminated
E0919 13:42:39.496349       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
E0919 13:42:39.779180       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
E0919 13:42:39.844710       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
I0919 13:42:40.025264       1 namespace_controller.go:185] Namespace has been deleted firewall-test-5236
E0919 13:42:40.041044       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
E0919 13:42:40.295375       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
E0919 13:42:40.303498       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
I0919 13:42:40.465513       1 replica_set.go:563] "Too few replicas" replicaSet="kubectl-5493/httpd-deployment-854f5f88d6" need=2 creating=2
I0919 13:42:40.465915       1 event.go:294] "Event occurred" object="kubectl-5493/httpd-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set httpd-deployment-854f5f88d6 to 2"
I0919 13:42:40.482119       1 event.go:294] "Event occurred" object="kubectl-5493/httpd-deployment-854f5f88d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: httpd-deployment-854f5f88d6-f85j9"
I0919 13:42:40.492938       1 event.go:294] "Event occurred" object="kubectl-5493/httpd-deployment-854f5f88d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: httpd-deployment-854f5f88d6-f755n"
I0919 13:42:40.510247       1 deployment_controller.go:490] "Error syncing deployment" deployment="kubectl-5493/httpd-deployment" err="Operation cannot be fulfilled on deployments.apps \"httpd-deployment\": the object has been modified; please apply your changes to the latest version and try again"
E0919 13:42:40.604862       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
E0919 13:42:41.322460       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
E0919 13:42:41.366676       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods
I0919 13:42:41.369762       1 event.go:294] "Event occurred" object="statefulset-8088/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success"
I0919 13:42:41.373856       1 event.go:294] "Event occurred" object="statefulset-8088/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0919 13:42:41.384776       1 event.go:294] "Event occurred" object="statefulset-8088/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-0 in StatefulSet ss successful"
I0919 13:42:41.417813       1 event.go:294] "Event occurred" object="statefulset-8088/datadir-ss-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0919 13:42:41.453529       1 deployment_controller.go:490] "Error syncing deployment" deployment="kubectl-5493/httpd-deployment" err="Operation cannot be fulfilled on deployments.apps \"httpd-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:41.569290       1 controller_ref_manager.go:232] patching pod replicaset-4431_pod-adoption-release to remove its controllerRef to apps/v1/ReplicaSet:pod-adoption-release
I0919 13:42:41.585029       1 replica_set.go:563] "Too few replicas" replicaSet="replicaset-4431/pod-adoption-release" need=1 creating=1
I0919 13:42:41.589057       1 garbagecollector.go:475] "Processing object" object="replicaset-4431/pod-adoption-release" objectUID=bf8607c7-7540-44b3-aa87-552e316a0568 kind="ReplicaSet" virtual=false
I0919 13:42:41.624069       1 event.go:294] "Event occurred" object="replicaset-4431/pod-adoption-release" kind="ReplicaSet" apiVersion="apps/v1" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: pod-adoption-release-h2qhn\"\nI0919 13:42:41.650513       1 garbagecollector.go:514] object [apps/v1/ReplicaSet, namespace: replicaset-4431, name: pod-adoption-release, uid: bf8607c7-7540-44b3-aa87-552e316a0568]'s doesn't have an owner, continue on next item\nI0919 13:42:41.749165       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-595/test-recreate-deployment-785fd889\" need=1 creating=1\nI0919 13:42:41.759063       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-595/test-recreate-deployment-785fd889-wpx8h\" objectUID=160abd77-8e7d-483f-b883-fe46779a080a kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:41.777090       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-595/test-recreate-deployment-785fd889-wpx8h\" objectUID=160abd77-8e7d-483f-b883-fe46779a080a kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:41.841074       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-595/test-recreate-deployment-7f9bc4579c\" objectUID=7a80aed2-580d-48a8-b3a5-760e89c763e2 kind=\"ReplicaSet\" virtual=false\nI0919 13:42:41.841448       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-595/test-recreate-deployment\"\nI0919 13:42:41.841705       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-595/test-recreate-deployment-785fd889\" objectUID=f30af9e2-5330-4e5e-a401-b029df3d4732 kind=\"ReplicaSet\" virtual=false\nI0919 13:42:41.898259       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-595/test-recreate-deployment-785fd889\" objectUID=f30af9e2-5330-4e5e-a401-b029df3d4732 kind=\"ReplicaSet\" propagationPolicy=Background\nI0919 13:42:41.905669       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-595/test-recreate-deployment-7f9bc4579c\" objectUID=7a80aed2-580d-48a8-b3a5-760e89c763e2 kind=\"ReplicaSet\" 
propagationPolicy=Background\nI0919 13:42:42.110315       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-1557/pod-351d5fd3-d60b-4c2b-a8ea-33beab440a1e\" PVC=\"persistent-local-volumes-test-1557/pvc-frrlx\"\nI0919 13:42:42.110343       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1557/pvc-frrlx\"\nI0919 13:42:42.254422       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-1557/pod-351d5fd3-d60b-4c2b-a8ea-33beab440a1e\" PVC=\"persistent-local-volumes-test-1557/pvc-frrlx\"\nI0919 13:42:42.255094       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1557/pvc-frrlx\"\nI0919 13:42:42.279747       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-1557/pvc-frrlx\"\nI0919 13:42:42.327841       1 pv_controller.go:640] volume \"local-pvjw7c6\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:42:42.333864       1 pv_controller.go:879] volume \"local-pvjw7c6\" entered phase \"Released\"\nI0919 13:42:42.347032       1 pv_controller_base.go:521] deletion of claim \"persistent-local-volumes-test-1557/pvc-frrlx\" was already processed\nI0919 13:42:42.425316       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-5493/httpd-deployment-854f5f88d6\" need=3 creating=1\nI0919 13:42:42.431496       1 event.go:294] \"Event occurred\" object=\"kubectl-5493/httpd-deployment-854f5f88d6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: httpd-deployment-854f5f88d6-dbw9n\"\nI0919 13:42:42.431527       1 event.go:294] \"Event occurred\" object=\"kubectl-5493/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-854f5f88d6 to 3\"\nE0919 
13:42:42.546950       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods\nE0919 13:42:42.579161       1 tokens_controller.go:262] error synchronizing serviceaccount events-5954/default: secrets \"default-token-tklxh\" is forbidden: unable to create new content in namespace events-5954 because it is being terminated\nI0919 13:42:42.997411       1 namespace_controller.go:185] Namespace has been deleted cronjob-9790\nE0919 13:42:43.108533       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods\nE0919 13:42:43.168687       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-7491/pvc-t44fp: storageclass.storage.k8s.io \"provisioning-7491\" not found\nI0919 13:42:43.168994       1 event.go:294] \"Event occurred\" object=\"provisioning-7491/pvc-t44fp\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7491\\\" not found\"\nI0919 13:42:43.288499       1 pv_controller.go:879] volume \"local-5tgbh\" entered phase \"Available\"\nI0919 13:42:43.314829       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"crd-webhook-5165/sample-crd-conversion-webhook-deployment-b49d8b4cf\" need=1 creating=1\nI0919 13:42:43.317045       1 event.go:294] \"Event occurred\" object=\"crd-webhook-5165/sample-crd-conversion-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-crd-conversion-webhook-deployment-b49d8b4cf to 1\"\nE0919 13:42:43.333539       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-1072/default: secrets \"default-token-r7gq9\" is forbidden: unable to create new content in 
namespace persistent-local-volumes-test-1072 because it is being terminated\nI0919 13:42:43.334690       1 event.go:294] \"Event occurred\" object=\"crd-webhook-5165/sample-crd-conversion-webhook-deployment-b49d8b4cf\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-crd-conversion-webhook-deployment-b49d8b4cf-hcrjq\"\nI0919 13:42:43.354377       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"crd-webhook-5165/sample-crd-conversion-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-crd-conversion-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:43.387788       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"crd-webhook-5165/sample-crd-conversion-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-crd-conversion-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:43.401450       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"crd-webhook-5165/sample-crd-conversion-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-crd-conversion-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:43.615807       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-5493/httpd-deployment-5dbf858bdf\" need=1 creating=1\nI0919 13:42:43.626647       1 event.go:294] \"Event occurred\" object=\"kubectl-5493/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-5dbf858bdf to 1\"\nI0919 13:42:43.634367       1 event.go:294] \"Event occurred\" object=\"kubectl-5493/httpd-deployment-5dbf858bdf\" 
kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: httpd-deployment-5dbf858bdf-mwds8\"\nI0919 13:42:43.647659       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-5493/httpd-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"httpd-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0919 13:42:43.711260       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods\nI0919 13:42:43.818897       1 namespace_controller.go:185] Namespace has been deleted sysctl-3396\nI0919 13:42:43.934063       1 stateful_set_control.go:555] StatefulSet statefulset-7255/ss2 terminating Pod ss2-2 for update\nI0919 13:42:43.970156       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"webhook-6515/sample-webhook-deployment-8f89dbb55\" need=1 creating=1\nI0919 13:42:43.977477       1 event.go:294] \"Event occurred\" object=\"webhook-6515/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-8f89dbb55 to 1\"\nI0919 13:42:44.010779       1 event.go:294] \"Event occurred\" object=\"statefulset-7255/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI0919 13:42:44.076153       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-6515/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:44.077447       1 event.go:294] \"Event occurred\" object=\"webhook-6515/sample-webhook-deployment-8f89dbb55\" 
kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-8f89dbb55-ndn2g\"\nI0919 13:42:44.228611       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-6180/test-rs\" need=4 creating=1\nI0919 13:42:44.288829       1 garbagecollector.go:475] \"Processing object\" object=\"replicaset-6180/test-rs-h7jhz\" objectUID=cf5af463-3ed4-4d8f-ac45-26c65e3de9a1 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:44.295944       1 garbagecollector.go:584] \"Deleting object\" object=\"replicaset-6180/test-rs-h7jhz\" objectUID=cf5af463-3ed4-4d8f-ac45-26c65e3de9a1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0919 13:42:44.329185       1 garbagecollector.go:475] \"Processing object\" object=\"replicaset-6180/test-rs-tjrbr\" objectUID=7590a38e-8a21-477d-a807-56ad3f1b6808 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:44.343137       1 garbagecollector.go:584] \"Deleting object\" object=\"replicaset-6180/test-rs-tjrbr\" objectUID=7590a38e-8a21-477d-a807-56ad3f1b6808 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0919 13:42:44.646330       1 tokens_controller.go:262] error synchronizing serviceaccount emptydir-6493/default: serviceaccounts \"default\" not found\nI0919 13:42:44.658481       1 event.go:294] \"Event occurred\" object=\"statefulset-7255/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nI0919 13:42:44.704940       1 garbagecollector.go:475] \"Processing object\" object=\"container-probe-3452/liveness-5fccc32e-2af6-411d-8791-3d35d44b33c7\" objectUID=5c63c442-adfe-423d-b8a4-e1c9131a85c4 kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:44.715153       1 garbagecollector.go:584] \"Deleting object\" object=\"container-probe-3452/liveness-5fccc32e-2af6-411d-8791-3d35d44b33c7\" objectUID=5c63c442-adfe-423d-b8a4-e1c9131a85c4 kind=\"CiliumEndpoint\" 
propagationPolicy=Background\nI0919 13:42:44.727039       1 namespace_controller.go:185] Namespace has been deleted cronjob-2237\nE0919 13:42:44.941419       1 tokens_controller.go:262] error synchronizing serviceaccount replicaset-6180/default: secrets \"default-token-bnkg7\" is forbidden: unable to create new content in namespace replicaset-6180 because it is being terminated\nI0919 13:42:45.225256       1 event.go:294] \"Event occurred\" object=\"volume-expand-3640/awsw7gvj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0919 13:42:45.264589       1 pv_controller.go:879] volume \"pvc-9805db34-459e-43a1-a2aa-df3be5414ce5\" entered phase \"Bound\"\nI0919 13:42:45.265266       1 pv_controller.go:982] volume \"pvc-9805db34-459e-43a1-a2aa-df3be5414ce5\" bound to claim \"statefulset-8088/datadir-ss-0\"\nI0919 13:42:45.285688       1 pv_controller.go:823] claim \"statefulset-8088/datadir-ss-0\" entered phase \"Bound\"\nE0919 13:42:45.434277       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-5698/default: secrets \"default-token-l68x5\" is forbidden: unable to create new content in namespace disruption-5698 because it is being terminated\nE0919 13:42:45.461189       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods\nI0919 13:42:45.462577       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-9805db34-459e-43a1-a2aa-df3be5414ce5\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02f6f81953a6120c2\") from node \"ip-172-20-62-71.eu-central-1.compute.internal\" \nI0919 13:42:45.465059       1 event.go:294] \"Event occurred\" object=\"volume-expand-3640/awsw7gvj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be 
created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nE0919 13:42:45.489297       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8929/pvc-rmgct: storageclass.storage.k8s.io \"provisioning-8929\" not found\nI0919 13:42:45.489662       1 event.go:294] \"Event occurred\" object=\"provisioning-8929/pvc-rmgct\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8929\\\" not found\"\nI0919 13:42:45.531663       1 namespace_controller.go:185] Namespace has been deleted volume-421\nI0919 13:42:45.599411       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"kubectl-5493/httpd-deployment-854f5f88d6\" need=2 deleting=1\nI0919 13:42:45.599449       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"kubectl-5493/httpd-deployment-854f5f88d6\" relatedReplicaSets=[httpd-deployment-5dbf858bdf httpd-deployment-854f5f88d6]\nI0919 13:42:45.600243       1 event.go:294] \"Event occurred\" object=\"kubectl-5493/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set httpd-deployment-854f5f88d6 to 2\"\nI0919 13:42:45.600403       1 controller_utils.go:592] \"Deleting pod\" controller=\"httpd-deployment-854f5f88d6\" pod=\"kubectl-5493/httpd-deployment-854f5f88d6-dbw9n\"\nE0919 13:42:45.605811       1 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"httpd-deployment.16a63cfb7113c048\", GenerateName:\"\", Namespace:\"kubectl-5493\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Deployment\", Namespace:\"kubectl-5493\", Name:\"httpd-deployment\", UID:\"18157f56-dad8-4543-81a4-7768503a9923\", APIVersion:\"apps/v1\", ResourceVersion:\"23665\", FieldPath:\"\"}, Reason:\"ScalingReplicaSet\", Message:\"Scaled down replica set httpd-deployment-854f5f88d6 to 2\", Source:v1.EventSource{Component:\"deployment-controller\", Host:\"\"}, FirstTimestamp:time.Date(2021, time.September, 19, 13, 42, 45, 599961160, time.Local), LastTimestamp:time.Date(2021, time.September, 19, 13, 42, 45, 599961160, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"httpd-deployment.16a63cfb7113c048\" is forbidden: unable to create new content in namespace kubectl-5493 because it is being terminated' (will not retry!)\nI0919 13:42:45.618010       1 event.go:294] \"Event occurred\" object=\"kubectl-5493/httpd-deployment-854f5f88d6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: httpd-deployment-854f5f88d6-dbw9n\"\nI0919 13:42:45.619425       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-5493/httpd-deployment-5dbf858bdf\" need=2 creating=1\nE0919 13:42:45.625066       1 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"httpd-deployment-854f5f88d6.16a63cfb7221ba77\", GenerateName:\"\", Namespace:\"kubectl-5493\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"kubectl-5493\", Name:\"httpd-deployment-854f5f88d6\", UID:\"b3560fea-3e0b-4f00-9f4a-452b5e3fa7d9\", APIVersion:\"apps/v1\", ResourceVersion:\"23828\", FieldPath:\"\"}, Reason:\"SuccessfulDelete\", Message:\"Deleted pod: httpd-deployment-854f5f88d6-dbw9n\", Source:v1.EventSource{Component:\"replicaset-controller\", Host:\"\"}, FirstTimestamp:time.Date(2021, time.September, 19, 13, 42, 45, 617654391, time.Local), LastTimestamp:time.Date(2021, time.September, 19, 13, 42, 45, 617654391, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"httpd-deployment-854f5f88d6.16a63cfb7221ba77\" is forbidden: unable to create new content in namespace kubectl-5493 because it is being terminated' (will not retry!)\nI0919 13:42:45.625547       1 event.go:294] \"Event occurred\" object=\"kubectl-5493/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-5dbf858bdf to 2\"\nE0919 13:42:45.630522       1 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"httpd-deployment.16a63cfb7296ebae\", GenerateName:\"\", Namespace:\"kubectl-5493\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Deployment\", Namespace:\"kubectl-5493\", Name:\"httpd-deployment\", UID:\"18157f56-dad8-4543-81a4-7768503a9923\", APIVersion:\"apps/v1\", ResourceVersion:\"23665\", FieldPath:\"\"}, Reason:\"ScalingReplicaSet\", Message:\"Scaled up replica set httpd-deployment-5dbf858bdf to 2\", Source:v1.EventSource{Component:\"deployment-controller\", Host:\"\"}, FirstTimestamp:time.Date(2021, time.September, 19, 13, 42, 45, 625334702, time.Local), LastTimestamp:time.Date(2021, time.September, 19, 13, 42, 45, 625334702, time.Local), Count:1, Type:\"Normal\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"httpd-deployment.16a63cfb7296ebae\" is forbidden: unable to create new content in namespace kubectl-5493 because it is being terminated' (will not retry!)\nI0919 13:42:45.640776       1 pv_controller.go:879] volume \"local-rw9d4\" entered phase \"Available\"\nI0919 13:42:45.640931       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-5493/httpd-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"httpd-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:45.655296       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"kubectl-5493/httpd-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"httpd-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0919 13:42:45.807180       1 namespace_controller.go:162] deletion of namespace e2e-privileged-pod-4559 failed: unexpected items still remain in namespace: e2e-privileged-pod-4559 for gvr: /v1, Resource=pods\nI0919 13:42:46.269601       1 replica_set.go:599] \"Too many 
replicas\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" need=0 deleting=1\nI0919 13:42:46.269816       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" relatedReplicaSets=[test-rolling-update-with-lb-6bb6c4d99b test-rolling-update-with-lb-8c8cdc96d test-rolling-update-with-lb-9c68d7c8b test-rolling-update-with-lb-945c6c889]\nI0919 13:42:46.269993       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-945c6c889\" pod=\"deployment-6925/test-rolling-update-with-lb-945c6c889-jfkk5\"\nI0919 13:42:46.275933       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-945c6c889 to 0\"\nI0919 13:42:46.292545       1 event.go:294] \"Event occurred\" object=\"deployment-6925/test-rolling-update-with-lb-945c6c889\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-945c6c889-jfkk5\"\nI0919 13:42:46.295293       1 garbagecollector.go:475] \"Processing object\" object=\"deployment-6925/test-rolling-update-with-lb-945c6c889-jfkk5\" objectUID=c72f4348-5e87-4e5d-a724-98df8489f14f kind=\"CiliumEndpoint\" virtual=false\nI0919 13:42:46.313635       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6925/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0919 13:42:46.332372       1 garbagecollector.go:584] \"Deleting object\" object=\"deployment-6925/test-rolling-update-with-lb-945c6c889-jfkk5\" objectUID=c72f4348-5e87-4e5d-a724-98df8489f14f kind=\"CiliumEndpoint\" 
propagationPolicy=Background\nE0919 13:42:46.368501       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-8762/pvc-mwvnw: storageclass.storage.k8s.io \"volume-8762\" not found\nI0919 13:42:46.368833       1 event.go:294] \"Event occurred\" object=\"volume-8762/pvc-mwvnw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-8762\\\" not found\"\nI0919 13:42:46.493840       1 pv_controller.go:879] volume \"local-qqvwc\" entered phase \"Available\"\nE0919 13:42:47.503100       1 tokens_controller.go:262] error synchronizing serviceaccount kubelet-test-7512/default: secrets \"default-token-fjkwm\" is forbidden: unable to create new content in namespace kubelet-test-7512 because it is being terminated\nI0919 13:42:47.607810       1 namespace_controller.go:185] Namespace has been deleted ephemeral-6797\nI0919 13:42:47.620831       1 namespace_controller.go:185] Namespace has been deleted events-5954\nI0919 13:42:47.710158       1 namespace_controller.go:185] Namespace has been deleted deployment-595\nI0919 13:42:47.791490       1 namespace_controller.go:185] Namespace has been deleted provisioning-2506\nI0919 13:42:47.830210       1 stateful_set_control.go:555] StatefulSet statefulset-7255/ss2 terminating Pod ss2-1 for update\nE0919 13:42:47.841470       1 tokens_controller.go:262] error synchronizing serviceaccount secret-namespace-9429/default: secrets \"default-token-8rwb5\" is forbidden: unable to create new content in namespace secret-namespace-9429 because it is being terminated\nI0919 13:42:47.851910       1 event.go:294] \"Event occurred\" object=\"statefulset-7255/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-1 in StatefulSet ss2 successful\"\nI0919 13:42:47.961041       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-4431/pod-adoption-release\" need=1 
creating=1
E0919 13:42:48.027485       1 tokens_controller.go:262] error synchronizing serviceaccount replicaset-4431/default: secrets "default-token-9lxbw" is forbidden: unable to create new content in namespace replicaset-4431 because it is being terminated
I0919 13:42:48.103852       1 namespace_controller.go:185] Namespace has been deleted provisioning-2388
I0919 13:42:48.185527       1 namespace_controller.go:185] Namespace has been deleted emptydir-3982
E0919 13:42:48.248258       1 tokens_controller.go:262] error synchronizing serviceaccount projected-8579/default: secrets "default-token-p9jqb" is forbidden: unable to create new content in namespace projected-8579 because it is being terminated
I0919 13:42:48.312151       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-9805db34-459e-43a1-a2aa-df3be5414ce5" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-02f6f81953a6120c2") from node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:42:48.312477       1 event.go:294] "Event occurred" object="statefulset-8088/ss-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-9805db34-459e-43a1-a2aa-df3be5414ce5\" "
E0919 13:42:48.394741       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
I0919 13:42:48.496802       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-1072
W0919 13:42:48.617264       1 reconciler.go:222] attacherDetacher.DetachVolume started for volume "pvc-2f3c02d0-34e0-4e11-b60e-9cc9979a286a" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-996^9ab1d59e-194e-11ec-9419-82b2cee93edf") on node "ip-172-20-62-71.eu-central-1.compute.internal" This volume is not safe to detach, but maxWaitForUnmountDuration 6m0s expired, force detaching
I0919 13:42:48.691993       1 garbagecollector.go:475] "Processing object" object="webhook-6515/e2e-test-webhook-zhhjq" objectUID=288b2f7b-ec21-43eb-a967-f27a015e3864 kind="EndpointSlice" virtual=false
I0919 13:42:48.701743       1 garbagecollector.go:584] "Deleting object" object="webhook-6515/e2e-test-webhook-zhhjq" objectUID=288b2f7b-ec21-43eb-a967-f27a015e3864 kind="EndpointSlice" propagationPolicy=Background
I0919 13:42:48.838659       1 pv_controller.go:879] volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" entered phase "Bound"
I0919 13:42:48.838707       1 pv_controller.go:982] volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" bound to claim "volume-expand-3640/awsw7gvj"
I0919 13:42:48.843534       1 garbagecollector.go:475] "Processing object" object="webhook-6515/sample-webhook-deployment-8f89dbb55" objectUID=53471523-a16b-4e3a-8092-6bd50b0e051e kind="ReplicaSet" virtual=false
I0919 13:42:48.843871       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-6515/sample-webhook-deployment"
I0919 13:42:48.856960       1 garbagecollector.go:584] "Deleting object" object="webhook-6515/sample-webhook-deployment-8f89dbb55" objectUID=53471523-a16b-4e3a-8092-6bd50b0e051e kind="ReplicaSet" propagationPolicy=Background
I0919 13:42:48.862925       1 pv_controller.go:823] claim "volume-expand-3640/awsw7gvj" entered phase "Bound"
I0919 13:42:48.868697       1 garbagecollector.go:475] "Processing object" object="webhook-6515/sample-webhook-deployment-8f89dbb55-ndn2g" objectUID=75f8c724-804a-4ec4-9f43-0266010b6e72 kind="Pod" virtual=false
I0919 13:42:48.872955       1 garbagecollector.go:584] "Deleting object" object="webhook-6515/sample-webhook-deployment-8f89dbb55-ndn2g" objectUID=75f8c724-804a-4ec4-9f43-0266010b6e72 kind="Pod" propagationPolicy=Background
I0919 13:42:48.884976       1 garbagecollector.go:475] "Processing object" object="webhook-6515/sample-webhook-deployment-8f89dbb55-ndn2g" objectUID=f3534fbd-5a64-4e62-be69-1d2d78807aae kind="CiliumEndpoint" virtual=false
I0919 13:42:48.888948       1 garbagecollector.go:584] "Deleting object" object="webhook-6515/sample-webhook-deployment-8f89dbb55-ndn2g" objectUID=f3534fbd-5a64-4e62-be69-1d2d78807aae kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:49.130737       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2f3c02d0-34e0-4e11-b60e-9cc9979a286a" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-996^9ab1d59e-194e-11ec-9419-82b2cee93edf") on node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:42:49.523684       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-025c3a1c373704517") from node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:42:49.537680       1 garbagecollector.go:475] "Processing object" object="ephemeral-6797-105/csi-hostpathplugin-0" objectUID=3c51501b-19b6-4d92-af6b-ade27d15741a kind="Pod" virtual=false
I0919 13:42:49.537959       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-6797-105/csi-hostpathplugin
I0919 13:42:49.538015       1 garbagecollector.go:475] "Processing object" object="ephemeral-6797-105/csi-hostpathplugin-7bb689f4f" objectUID=b20df801-8517-4f7d-bb42-0fe5fd3f4489 kind="ControllerRevision" virtual=false
I0919 13:42:49.543579       1 garbagecollector.go:584] "Deleting object" object="ephemeral-6797-105/csi-hostpathplugin-0" objectUID=3c51501b-19b6-4d92-af6b-ade27d15741a kind="Pod" propagationPolicy=Background
I0919 13:42:49.543648       1 garbagecollector.go:584] "Deleting object" object="ephemeral-6797-105/csi-hostpathplugin-7bb689f4f" objectUID=b20df801-8517-4f7d-bb42-0fe5fd3f4489 kind="ControllerRevision" propagationPolicy=Background
I0919 13:42:49.594475       1 garbagecollector.go:475] "Processing object" object="kubectl-5493/httpd-deployment-5dbf858bdf-mwds8" objectUID=0ed3aaa5-3029-445b-81a2-5b1e230fb94e kind="Pod" virtual=false
I0919 13:42:49.598548       1 garbagecollector.go:584] "Deleting object" object="kubectl-5493/httpd-deployment-5dbf858bdf-mwds8" objectUID=0ed3aaa5-3029-445b-81a2-5b1e230fb94e kind="Pod" propagationPolicy=Background
I0919 13:42:49.600316       1 garbagecollector.go:475] "Processing object" object="kubectl-5493/httpd-deployment-854f5f88d6-f85j9" objectUID=a7832913-f864-4563-878d-4d2bc5f98568 kind="Pod" virtual=false
I0919 13:42:49.600572       1 garbagecollector.go:475] "Processing object" object="kubectl-5493/httpd-deployment-854f5f88d6-f755n" objectUID=4a435316-140e-4ae0-adbe-44c58c4dd63e kind="Pod" virtual=false
I0919 13:42:49.604855       1 garbagecollector.go:584] "Deleting object" object="kubectl-5493/httpd-deployment-854f5f88d6-f755n" objectUID=4a435316-140e-4ae0-adbe-44c58c4dd63e kind="Pod" propagationPolicy=Background
I0919 13:42:49.610532       1 garbagecollector.go:584] "Deleting object" object="kubectl-5493/httpd-deployment-854f5f88d6-f85j9" objectUID=a7832913-f864-4563-878d-4d2bc5f98568 kind="Pod" propagationPolicy=Background
I0919 13:42:49.792274       1 deployment_controller.go:583] "Deployment has been deleted" deployment="kubectl-5493/httpd-deployment"
I0919 13:42:49.832774       1 namespace_controller.go:185] Namespace has been deleted emptydir-6493
E0919 13:42:49.984963       1 tokens_controller.go:262] error synchronizing serviceaccount volume-9820/default: secrets "default-token-ws252" is forbidden: unable to create new content in namespace volume-9820 because it is being terminated
I0919 13:42:50.161057       1 namespace_controller.go:185] Namespace has been deleted dns-892
I0919 13:42:50.300107       1 namespace_controller.go:185] Namespace has been deleted replicaset-6180
E0919 13:42:50.611102       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-5030/pvc-ckxtc: storageclass.storage.k8s.io "volume-5030" not found
I0919 13:42:50.611709       1 event.go:294] "Event occurred" object="volume-5030/pvc-ckxtc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-5030\" not found"
I0919 13:42:50.814870       1 pv_controller.go:879] volume "aws-mgqr9" entered phase "Available"
E0919 13:42:51.119406       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-6876/default: secrets "default-token-vfk4p" is forbidden: unable to create new content in namespace volumemode-6876 because it is being terminated
I0919 13:42:51.772431       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-025c3a1c373704517") from node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:42:51.772710       1 event.go:294] "Event occurred" object="volume-expand-3640/pod-e0059b06-df7b-4f66-87fd-b79e0c99ad5a" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0\" "
E0919 13:42:52.137166       1 tokens_controller.go:262] error synchronizing serviceaccount subpath-2088/default: secrets "default-token-brq4n" is forbidden: unable to create new content in namespace subpath-2088 because it is being terminated
I0919 13:42:52.546375       1 pv_controller.go:930] claim "provisioning-8929/pvc-rmgct" bound to volume "local-rw9d4"
I0919 13:42:52.560970       1 pv_controller.go:879] volume "local-rw9d4" entered phase "Bound"
I0919 13:42:52.561009       1 pv_controller.go:982] volume "local-rw9d4" bound to claim "provisioning-8929/pvc-rmgct"
I0919 13:42:52.570939       1 pv_controller.go:823] claim "provisioning-8929/pvc-rmgct" entered phase "Bound"
I0919 13:42:52.571205       1 pv_controller.go:930] claim "volume-5030/pvc-ckxtc" bound to volume "aws-mgqr9"
I0919 13:42:52.585196       1 pv_controller.go:879] volume "aws-mgqr9" entered phase "Bound"
I0919 13:42:52.585225       1 pv_controller.go:982] volume "aws-mgqr9" bound to claim "volume-5030/pvc-ckxtc"
I0919 13:42:52.598645       1 pv_controller.go:823] claim "volume-5030/pvc-ckxtc" entered phase "Bound"
I0919 13:42:52.599035       1 pv_controller.go:930] claim "provisioning-7491/pvc-t44fp" bound to volume "local-5tgbh"
I0919 13:42:52.626131       1 pv_controller.go:879] volume "local-5tgbh" entered phase "Bound"
I0919 13:42:52.626159       1 pv_controller.go:982] volume "local-5tgbh" bound to claim "provisioning-7491/pvc-t44fp"
I0919 13:42:52.643622       1 pv_controller.go:823] claim "provisioning-7491/pvc-t44fp" entered phase "Bound"
I0919 13:42:52.643887       1 pv_controller.go:930] claim "volume-8762/pvc-mwvnw" bound to volume "local-qqvwc"
I0919 13:42:52.669558       1 pv_controller.go:879] volume "local-qqvwc" entered phase "Bound"
I0919 13:42:52.669770       1 pv_controller.go:982] volume "local-qqvwc" bound to claim "volume-8762/pvc-mwvnw"
I0919 13:42:52.719520       1 pv_controller.go:823] claim "volume-8762/pvc-mwvnw" entered phase "Bound"
I0919 13:42:52.943405       1 replica_set.go:563] "Too few replicas" replicaSet="services-1639/pause-pod-68b8b8c7bc" need=2 creating=2
I0919 13:42:52.959674       1 event.go:294] "Event occurred" object="services-1639/pause-pod" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set pause-pod-68b8b8c7bc to 2"
I0919 13:42:53.008827       1 event.go:294] "Event occurred" object="services-1639/pause-pod-68b8b8c7bc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: pause-pod-68b8b8c7bc-s5qdd"
I0919 13:42:53.036377       1 event.go:294] "Event occurred" object="services-1639/pause-pod-68b8b8c7bc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: pause-pod-68b8b8c7bc-z687k"
I0919 13:42:53.036665       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:53.045149       1 deployment_controller.go:490] "Error syncing deployment" deployment="services-1639/pause-pod" err="Operation cannot be fulfilled on deployments.apps \"pause-pod\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:42:53.081617       1 event.go:294] "Event occurred" object="job-2004/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: exceed-active-deadline--1-tw44v"
I0919 13:42:53.083388       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:53.100945       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:53.108927       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:53.126779       1 event.go:294] "Event occurred" object="job-2004/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: exceed-active-deadline--1-snsz6"
I0919 13:42:53.137296       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:53.139498       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:53.157044       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:53.223274       1 namespace_controller.go:185] Namespace has been deleted secret-namespace-9429
I0919 13:42:53.312835       1 namespace_controller.go:185] Namespace has been deleted projected-8579
I0919 13:42:53.694840       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "aws-mgqr9" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-080983315c6e662b5") from node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:42:53.699931       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-5025-crds crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-7312-crds stable.example.com/v2, Resource=e2e-test-crd-webhook-6863-crds], removed: []
I0919 13:42:53.700531       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-webhook-6863-crds.stable.example.com
I0919 13:42:53.700632       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-5025-crds.crd-publish-openapi-test-common-group.example.com
I0919 13:42:53.700676       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-7312-crds.crd-publish-openapi-test-common-group.example.com
I0919 13:42:53.700772       1 shared_informer.go:240] Waiting for caches to sync for resource quota
E0919 13:42:53.761388       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-4412/default: secrets "default-token-4l64f" is forbidden: unable to create new content in namespace kubectl-4412 because it is being terminated
I0919 13:42:53.770027       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-9644/webserver-56fb65c6f6" need=8 creating=1
I0919 13:42:53.801836       1 shared_informer.go:247] Caches are synced for resource quota 
I0919 13:42:53.802034       1 resource_quota_controller.go:454] synced quota controller
E0919 13:42:53.918230       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-6515/default: secrets "default-token-4ffhx" is forbidden: unable to create new content in namespace webhook-6515 because it is being terminated
I0919 13:42:53.965538       1 garbagecollector.go:217] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-5025-crds crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-7312-crds stable.example.com/v2, Resource=e2e-test-crd-webhook-6863-crds], removed: [mygroup.example.com/v1beta1, Resource=noxus]
I0919 13:42:54.039335       1 controller_utils.go:592] "Deleting pod" controller="exceed-active-deadline" pod="job-2004/exceed-active-deadline--1-tw44v"
I0919 13:42:54.040326       1 controller_utils.go:592] "Deleting pod" controller="exceed-active-deadline" pod="job-2004/exceed-active-deadline--1-snsz6"
I0919 13:42:54.051148       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0919 13:42:54.051348       1 shared_informer.go:247] Caches are synced for garbage collector 
I0919 13:42:54.051420       1 garbagecollector.go:258] synced garbage collector
I0919 13:42:54.060323       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-5c557bc5bf" objectUID=55f3156c-49b0-4251-8f24-a0e5f322c4ae kind="ReplicaSet" virtual=false
I0919 13:42:54.060710       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-9644/webserver"
I0919 13:42:54.060846       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-56fb65c6f6" objectUID=0c9b7a6b-969a-4f23-82d5-1a418e4484d9 kind="ReplicaSet" virtual=false
I0919 13:42:54.061052       1 garbagecollector.go:475] "Processing object" object="deployment-9644/webserver-6756b7b6d4" objectUID=48f943c9-2493-4570-bf07-464dd12bce17 kind="ReplicaSet" virtual=false
I0919 13:42:54.109788       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-56fb65c6f6" objectUID=0c9b7a6b-969a-4f23-82d5-1a418e4484d9 kind="ReplicaSet" propagationPolicy=Background
I0919 13:42:54.113970       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-5c557bc5bf" objectUID=55f3156c-49b0-4251-8f24-a0e5f322c4ae kind="ReplicaSet" propagationPolicy=Background
I0919 13:42:54.114359       1 garbagecollector.go:584] "Deleting object" object="deployment-9644/webserver-6756b7b6d4" objectUID=48f943c9-2493-4570-bf07-464dd12bce17 kind="ReplicaSet" propagationPolicy=Background
I0919 13:42:54.136624       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:54.140537       1 event.go:294] "Event occurred" object="job-2004/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: exceed-active-deadline--1-snsz6"
I0919 13:42:54.186618       1 event.go:294] "Event occurred" object="job-2004/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: exceed-active-deadline--1-tw44v"
I0919 13:42:54.186789       1 event.go:294] "Event occurred" object="job-2004/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Warning" reason="DeadlineExceeded" message="Job was active longer than specified deadline"
I0919 13:42:54.187116       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:54.217812       1 event.go:294] "Event occurred" object="job-2004/exceed-active-deadline" kind="Job" apiVersion="batch/v1" type="Warning" reason="DeadlineExceeded" message="Job was active longer than specified deadline"
I0919 13:42:54.221520       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:54.264127       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
E0919 13:42:54.328902       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:54.494555       1 event.go:294] "Event occurred" object="statefulset-7255/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
E0919 13:42:54.526837       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0919 13:42:54.861079       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
I0919 13:42:55.113268       1 event.go:294] "Event occurred" object="volume-expand-8655/aws7jjr9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E0919 13:42:55.250028       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:55.287833       1 event.go:294] "Event occurred" object="statefulset-1762/test-ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod test-ss-0 in StatefulSet test-ss successful"
E0919 13:42:55.318575       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:55.324316       1 garbagecollector.go:475] "Processing object" object="crd-webhook-5165/e2e-test-crd-conversion-webhook-wxb2f" objectUID=5ffb51fe-6420-4329-a1af-f74e8a50f8f1 kind="EndpointSlice" virtual=false
I0919 13:42:55.338645       1 garbagecollector.go:584] "Deleting object" object="crd-webhook-5165/e2e-test-crd-conversion-webhook-wxb2f" objectUID=5ffb51fe-6420-4329-a1af-f74e8a50f8f1 kind="EndpointSlice" propagationPolicy=Background
I0919 13:42:55.367085       1 event.go:294] "Event occurred" object="volume-expand-8655/aws7jjr9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0919 13:42:55.370222       1 namespace_controller.go:185] Namespace has been deleted volume-9820
I0919 13:42:55.523237       1 garbagecollector.go:475] "Processing object" object="crd-webhook-5165/sample-crd-conversion-webhook-deployment-b49d8b4cf" objectUID=21ed735f-2b15-4504-873d-5e7b7cd4b810 kind="ReplicaSet" virtual=false
I0919 13:42:55.523483       1 deployment_controller.go:583] "Deployment has been deleted" deployment="crd-webhook-5165/sample-crd-conversion-webhook-deployment"
I0919 13:42:55.528958       1 garbagecollector.go:584] "Deleting object" object="crd-webhook-5165/sample-crd-conversion-webhook-deployment-b49d8b4cf" objectUID=21ed735f-2b15-4504-873d-5e7b7cd4b810 kind="ReplicaSet" propagationPolicy=Background
I0919 13:42:55.552467       1 garbagecollector.go:475] "Processing object" object="crd-webhook-5165/sample-crd-conversion-webhook-deployment-b49d8b4cf-hcrjq" objectUID=6181920b-db9f-44bb-9504-c399b399bef5 kind="Pod" virtual=false
I0919 13:42:55.560621       1 namespace_controller.go:185] Namespace has been deleted container-probe-3452
I0919 13:42:55.566320       1 garbagecollector.go:584] "Deleting object" object="crd-webhook-5165/sample-crd-conversion-webhook-deployment-b49d8b4cf-hcrjq" objectUID=6181920b-db9f-44bb-9504-c399b399bef5 kind="Pod" propagationPolicy=Background
I0919 13:42:55.622365       1 garbagecollector.go:475] "Processing object" object="crd-webhook-5165/sample-crd-conversion-webhook-deployment-b49d8b4cf-hcrjq" objectUID=25f2038c-cc72-4ca9-9b78-abf8615485c4 kind="CiliumEndpoint" virtual=false
I0919 13:42:55.648656       1 garbagecollector.go:584] "Deleting object" object="crd-webhook-5165/sample-crd-conversion-webhook-deployment-b49d8b4cf-hcrjq" objectUID=25f2038c-cc72-4ca9-9b78-abf8615485c4 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:42:55.711091       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:55.921012       1 namespace_controller.go:185] Namespace has been deleted disruption-5698
I0919 13:42:56.145471       1 namespace_controller.go:185] Namespace has been deleted e2e-privileged-pod-4559
I0919 13:42:56.236201       1 namespace_controller.go:185] Namespace has been deleted volumemode-6876
I0919 13:42:56.324231       1 namespace_controller.go:185] Namespace has been deleted kubelet-2164
I0919 13:42:56.506863       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:42:56.512688       1 event.go:294] "Event occurred" object="job-4069/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-local--1-zqx9p"
I0919 13:42:56.514311       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:42:56.517465       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:42:56.517967       1 event.go:294] "Event occurred" object="job-4069/fail-once-local" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: fail-once-local--1-7dbvw"
I0919 13:42:56.520578       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:42:56.521791       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:42:56.528463       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
E0919 13:42:56.603109       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:56.670477       1 namespace_controller.go:185] Namespace has been deleted nettest-7505
I0919 13:42:56.779704       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-958
E0919 13:42:57.357319       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-5415/default: secrets "default-token-zb5g9" is forbidden: unable to create new content in namespace disruption-5415 because it is being terminated
I0919 13:42:57.363239       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "aws-mgqr9" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-080983315c6e662b5") from node "ip-172-20-62-71.eu-central-1.compute.internal" 
I0919 13:42:57.363425       1 event.go:294] "Event occurred" object="volume-5030/exec-volume-test-preprovisionedpv-kwx7" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"aws-mgqr9\" "
I0919 13:42:57.511584       1 namespace_controller.go:185] Namespace has been deleted subpath-2088
E0919 13:42:57.694764       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:57.711702       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
E0919 13:42:58.163960       1 tokens_controller.go:262] error synchronizing serviceaccount hostpath-2462/default: secrets "default-token-ftkx9" is forbidden: unable to create new content in namespace hostpath-2462 because it is being terminated
I0919 13:42:58.457385       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-2725/pod-6801315b-1e0d-4f24-825b-139c89d6d287" PVC="persistent-local-volumes-test-2725/pvc-vvdxz"
I0919 13:42:58.457419       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-2725/pvc-vvdxz"
E0919 13:42:58.796903       1 tokens_controller.go:262] error synchronizing serviceaccount crd-watch-2397/default: secrets "default-token-qf7q2" is forbidden: unable to create new content in namespace crd-watch-2397 because it is being terminated
I0919 13:42:58.882188       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:42:58.908367       1 pv_controller.go:879] volume "pvc-73c287ee-a0e3-4556-8634-fc231ea4f147" entered phase "Bound"
I0919 13:42:58.908408       1 pv_controller.go:982] volume "pvc-73c287ee-a0e3-4556-8634-fc231ea4f147" bound to claim "volume-expand-8655/aws7jjr9"
I0919 13:42:58.917078       1 pv_controller.go:823] claim "volume-expand-8655/aws7jjr9" entered phase "Bound"
E0919 13:42:58.996797       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:59.286346       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
E0919 13:42:59.317438       1 tokens_controller.go:262] error synchronizing serviceaccount apf-5954/default: secrets "default-token-4hh78" is forbidden: unable to create new content in namespace apf-5954 because it is being terminated
I0919 13:42:59.381278       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-73c287ee-a0e3-4556-8634-fc231ea4f147" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0aa21741d44293999") from node "ip-172-20-55-38.eu-central-1.compute.internal" 
E0919 13:42:59.480599       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:42:59.587694       1 namespace_controller.go:185] Namespace has been deleted deployment-9644
E0919 13:42:59.757295       1 tokens_controller.go:262] error synchronizing serviceaccount projected-4830/default: secrets "default-token-9jl7r" is forbidden: unable to create new content in namespace projected-4830 because it is being terminated
I0919 13:42:59.802499       1 namespace_controller.go:185] Namespace has been deleted replicaset-4431
I0919 13:42:59.842705       1 namespace_controller.go:185] Namespace has been deleted apply-4144
I0919 13:42:59.905990       1 namespace_controller.go:185] Namespace has been deleted webhook-6515-markers
I0919 13:42:59.918419       1 namespace_controller.go:185] Namespace has been deleted kubectl-4412
I0919 13:42:59.957760       1 namespace_controller.go:185] Namespace has been deleted webhook-6515
I0919 13:43:00.104981       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:00.158162       1 event.go:294] "Event occurred" object="cronjob-793/replace" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job replace-27200983"
I0919 13:43:00.158435       1 job_controller.go:406] enqueueing job cronjob-793/replace-27200983
I0919 13:43:00.162733       1 job_controller.go:406] enqueueing job cronjob-2639/forbid-27200983
I0919 13:43:00.168657       1 event.go:294] "Event occurred" object="cronjob-2639/forbid" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job forbid-27200983"
I0919 13:43:00.208592       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-793/replace" resourceVersion="23420" err="Operation cannot be fulfilled on cronjobs.batch \"replace\": the object has been modified; please apply your changes to the latest version and try again"
E0919 13:43:00.209356       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-793/replace, requeuing: Operation cannot be fulfilled on cronjobs.batch "replace": the object has been modified; please apply your changes to the latest version and try again
I0919 13:43:00.209296       1 event.go:294] "Event occurred" object="cronjob-2639/forbid-27200983" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: forbid-27200983--1-zxtb7"
I0919 13:43:00.212929       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-2639/forbid" resourceVersion="24122" err="Operation cannot be fulfilled on cronjobs.batch \"forbid\": the object has been modified; please apply your changes to the latest version and try again"
E0919 13:43:00.212952       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-2639/forbid, requeuing: Operation cannot be fulfilled on cronjobs.batch "forbid": the object has been modified; please apply your changes to the latest version and try again
I0919 13:43:00.213297       1 job_controller.go:406] enqueueing job cronjob-2639/forbid-27200983
I0919 13:43:00.232961       1 event.go:294] "Event occurred" object="cronjob-793/replace-27200983" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: replace-27200983--1-rztkn"
I0919 13:43:00.235704       1 job_controller.go:406] enqueueing job cronjob-793/replace-27200983
I0919 13:43:00.236463       1 job_controller.go:406] enqueueing job cronjob-2639/forbid-27200983
I0919 13:43:00.243302       1 job_controller.go:406] enqueueing job cronjob-2639/forbid-27200983
I0919 13:43:00.257830       1 job_controller.go:406] enqueueing job cronjob-793/replace-27200983
E0919 13:43:00.269310       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:43:00.295503       1 job_controller.go:406] enqueueing job cronjob-793/replace-27200983
I0919 13:43:00.307556       1 job_controller.go:406] enqueueing job cronjob-793/replace-27200983
I0919 13:43:00.417648       1 namespace_controller.go:185] Namespace has been deleted kubectl-5493
I0919 13:43:00.598034       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-1557
I0919 13:43:00.750993       1 garbagecollector.go:475] "Processing object" object="job-2004/exceed-active-deadline--1-tw44v" objectUID=4bb491c9-263a-4cf3-a156-462570452c2b kind="Pod" virtual=false
I0919 13:43:00.751033       1 garbagecollector.go:475] "Processing object" object="job-2004/exceed-active-deadline--1-snsz6" objectUID=45a065e4-86fc-4e27-a993-7e505ac5223a kind="Pod" virtual=false
I0919 13:43:00.751048       1 job_controller.go:406] enqueueing job job-2004/exceed-active-deadline
I0919 13:43:00.793476       1 event.go:294] "Event occurred" object="statefulset-8088/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0919 13:43:00.793622       1 event.go:294] "Event occurred" object="statefulset-8088/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success"
I0919 13:43:00.799423       1 event.go:294] "Event occurred" object="statefulset-8088/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0919 13:43:00.816569       1 event.go:294] "Event occurred" object="statefulset-8088/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0919 13:43:00.816598       1 event.go:294] "Event occurred" object="statefulset-8088/datadir-ss-1" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
E0919 13:43:00.885225       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:43:01.382209       1 job_controller.go:406] enqueueing job cronjob-2639/forbid-27200983
I0919 13:43:01.483819       1 stateful_set_control.go:555] StatefulSet statefulset-7255/ss2 terminating Pod ss2-0 for update
I0919 13:43:01.496997       1 event.go:294] "Event occurred" object="statefulset-7255/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-0 in StatefulSet ss2 successful"
I0919 13:43:01.642110       1 event.go:294] "Event occurred" object="volume-expand-8655/pod-2d93d042-d788-44ac-8381-2fc65a8d3ba1" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-73c287ee-a0e3-4556-8634-fc231ea4f147\" "
I0919 13:43:01.642316       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-73c287ee-a0e3-4556-8634-fc231ea4f147" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0aa21741d44293999") from node "ip-172-20-55-38.eu-central-1.compute.internal" 
I0919 13:43:01.995991       1 garbagecollector.go:475] "Processing object" object="services-1639/pause-pod-68b8b8c7bc" objectUID=ee8119d9-402b-4fbf-acb2-72e54a7eb253 kind="ReplicaSet" virtual=false
I0919 13:43:01.996421       1 deployment_controller.go:583] "Deployment has been deleted" deployment="services-1639/pause-pod"
I0919 13:43:01.998234       1 garbagecollector.go:584] "Deleting object" object="services-1639/pause-pod-68b8b8c7bc" objectUID=ee8119d9-402b-4fbf-acb2-72e54a7eb253 kind="ReplicaSet" propagationPolicy=Background
I0919 13:43:02.001564       1 garbagecollector.go:475] "Processing object" object="services-1639/pause-pod-68b8b8c7bc-s5qdd" objectUID=1470e93d-0129-4842-aa4b-d08f10377d52 kind="Pod" virtual=false
I0919 13:43:02.001827       1 garbagecollector.go:475] "Processing object" object="services-1639/pause-pod-68b8b8c7bc-z687k" objectUID=f5855283-b5a8-43d6-90e6-46eebab3cb60 kind="Pod" virtual=false
I0919 13:43:02.004270       1 garbagecollector.go:584] "Deleting object" object="services-1639/pause-pod-68b8b8c7bc-s5qdd" objectUID=1470e93d-0129-4842-aa4b-d08f10377d52 kind="Pod" propagationPolicy=Background
I0919 13:43:02.004599       1 garbagecollector.go:584] "Deleting object" object="services-1639/pause-pod-68b8b8c7bc-z687k" objectUID=f5855283-b5a8-43d6-90e6-46eebab3cb60 kind="Pod" propagationPolicy=Background
I0919 13:43:02.017087       1 garbagecollector.go:475] "Processing object" object="services-1639/pause-pod-68b8b8c7bc-s5qdd" objectUID=0674f7cb-d246-4132-bd09-a05bda82cfaa kind="CiliumEndpoint" virtual=false
I0919 13:43:02.019835       1 garbagecollector.go:475] "Processing object" object="services-1639/pause-pod-68b8b8c7bc-z687k" objectUID=1eeb7bd9-d6be-4797-a535-e9356c01150b kind="CiliumEndpoint" virtual=false
I0919 13:43:02.024507       1 garbagecollector.go:584] "Deleting object" object="services-1639/pause-pod-68b8b8c7bc-s5qdd" objectUID=0674f7cb-d246-4132-bd09-a05bda82cfaa kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:43:02.026355       1 garbagecollector.go:584] "Deleting object" object="services-1639/pause-pod-68b8b8c7bc-z687k" objectUID=1eeb7bd9-d6be-4797-a535-e9356c01150b kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:43:02.117765       1 garbagecollector.go:475] "Processing object" object="services-1639/echo-sourceip" objectUID=b905e4ba-5984-4395-b9fa-562587119962 kind="CiliumEndpoint" virtual=false
I0919 13:43:02.124673       1 garbagecollector.go:584] "Deleting object" object="services-1639/echo-sourceip" objectUID=b905e4ba-5984-4395-b9fa-562587119962 kind="CiliumEndpoint" propagationPolicy=Background
I0919 13:43:02.147742       1 job_controller.go:406] enqueueing job cronjob-2639/forbid-27200983
I0919 13:43:02.148265       1 garbagecollector.go:475] "Processing object" object="cronjob-2639/forbid-27200983--1-zxtb7" objectUID=23e14ddd-37f0-414c-8643-f5967f7c0a47 kind="Pod" virtual=false
I0919 13:43:02.151124       1 event.go:294] "Event occurred" object="cronjob-2639/forbid" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="MissingJob" message="Active job went missing: forbid-27200983"
I0919 13:43:02.152482       1 garbagecollector.go:584] "Deleting object" object="cronjob-2639/forbid-27200983--1-zxtb7" objectUID=23e14ddd-37f0-414c-8643-f5967f7c0a47 kind="Pod" propagationPolicy=Background
E0919 13:43:02.169783       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:43:02.229404       1 garbagecollector.go:475] "Processing object" object="services-1639/sourceip-test-28xxq" objectUID=2a48333f-d77f-425d-8772-b658272fc475 kind="EndpointSlice" virtual=false
I0919
13:43:02.235080       1 garbagecollector.go:584] \"Deleting object\" object=\"services-1639/sourceip-test-28xxq\" objectUID=2a48333f-d77f-425d-8772-b658272fc475 kind=\"EndpointSlice\" propagationPolicy=Background\nI0919 13:43:02.286789       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:02.602031       1 event.go:294] \"Event occurred\" object=\"statefulset-7255/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-0 in StatefulSet ss2 successful\"\nI0919 13:43:02.642508       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2725/pod-6801315b-1e0d-4f24-825b-139c89d6d287\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:02.643275       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:02.684548       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:02.836632       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-8762/pvc-mwvnw\"\nI0919 13:43:02.858899       1 pv_controller.go:640] volume \"local-qqvwc\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:43:02.863868       1 pv_controller.go:879] volume \"local-qqvwc\" entered phase \"Released\"\nE0919 13:43:02.874901       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:02.951784       1 pv_controller_base.go:521] deletion of claim \"volume-8762/pvc-mwvnw\" was already processed\nI0919 13:43:03.042771       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2725/pod-6801315b-1e0d-4f24-825b-139c89d6d287\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:03.045653       1 
pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:03.050159       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2725/pod-6801315b-1e0d-4f24-825b-139c89d6d287\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:03.050188       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nE0919 13:43:03.195728       1 tokens_controller.go:262] error synchronizing serviceaccount watch-5750/default: secrets \"default-token-djqjb\" is forbidden: unable to create new content in namespace watch-5750 because it is being terminated\nI0919 13:43:03.311321       1 namespace_controller.go:185] Namespace has been deleted hostpath-2462\nI0919 13:43:03.483334       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:03.484088       1 namespace_controller.go:185] Namespace has been deleted configmap-1500\nI0919 13:43:03.488586       1 event.go:294] \"Event occurred\" object=\"job-4069/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local--1-dfg74\"\nI0919 13:43:03.489287       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:03.496906       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:03.500109       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:03.694060       1 job_controller.go:406] enqueueing job cronjob-793/replace-27200983\nI0919 13:43:03.817843       1 namespace_controller.go:185] Namespace has been deleted crd-watch-2397\nI0919 13:43:04.109788       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:04.139545       1 event.go:294] \"Event occurred\" object=\"job-4069/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" 
type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-local--1-zlh2h\"\nI0919 13:43:04.145489       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:04.153676       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nE0919 13:43:04.213116       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:04.213582       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:04.371624       1 namespace_controller.go:185] Namespace has been deleted apf-5954\nI0919 13:43:04.428010       1 pv_controller.go:879] volume \"pvc-f801191a-e52b-4998-93b8-9f8547af6510\" entered phase \"Bound\"\nI0919 13:43:04.428184       1 pv_controller.go:982] volume \"pvc-f801191a-e52b-4998-93b8-9f8547af6510\" bound to claim \"statefulset-8088/datadir-ss-1\"\nI0919 13:43:04.439041       1 pv_controller.go:823] claim \"statefulset-8088/datadir-ss-1\" entered phase \"Bound\"\nI0919 13:43:04.844100       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-f801191a-e52b-4998-93b8-9f8547af6510\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05fb9ddfc2f8cdce8\") from node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:43:04.846004       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2725/pod-6801315b-1e0d-4f24-825b-139c89d6d287\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:04.846034       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:04.850894       1 namespace_controller.go:185] Namespace has been deleted projected-4830\nE0919 13:43:05.001496       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list 
*v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:05.041611       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-2725/pod-6801315b-1e0d-4f24-825b-139c89d6d287\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:05.041640       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:05.045606       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-2725/pvc-vvdxz\"\nI0919 13:43:05.052774       1 pv_controller.go:640] volume \"local-pvvvvtc\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:43:05.056875       1 pv_controller.go:879] volume \"local-pvvvvtc\" entered phase \"Released\"\nI0919 13:43:05.060787       1 pv_controller_base.go:521] deletion of claim \"persistent-local-volumes-test-2725/pvc-vvdxz\" was already processed\nE0919 13:43:05.218851       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods\nE0919 13:43:05.279314       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:05.642187       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-5165\nI0919 13:43:06.029660       1 namespace_controller.go:185] Namespace has been deleted ephemeral-6797-105\nI0919 13:43:06.029938       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7491/pvc-t44fp\"\nI0919 13:43:06.039254       1 pv_controller.go:640] volume \"local-5tgbh\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:43:06.045874       1 pv_controller.go:879] volume \"local-5tgbh\" entered phase \"Released\"\nI0919 13:43:06.151164       1 
pv_controller_base.go:521] deletion of claim \"provisioning-7491/pvc-t44fp\" was already processed\nI0919 13:43:06.189306       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-8929/pvc-rmgct\"\nI0919 13:43:06.194465       1 pv_controller.go:640] volume \"local-rw9d4\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:43:06.198055       1 pv_controller.go:879] volume \"local-rw9d4\" entered phase \"Released\"\nE0919 13:43:06.221663       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:06.305440       1 pv_controller_base.go:521] deletion of claim \"provisioning-8929/pvc-rmgct\" was already processed\nI0919 13:43:06.483027       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nE0919 13:43:06.624464       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0919 13:43:06.756185       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-947/pvc-kx2xh: storageclass.storage.k8s.io \"provisioning-947\" not found\nI0919 13:43:06.756423       1 event.go:294] \"Event occurred\" object=\"provisioning-947/pvc-kx2xh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-947\\\" not found\"\nI0919 13:43:06.873589       1 pv_controller.go:879] volume \"local-ffgxt\" entered phase \"Available\"\nI0919 13:43:07.117918       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-f801191a-e52b-4998-93b8-9f8547af6510\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05fb9ddfc2f8cdce8\") from node 
\"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:43:07.118112       1 event.go:294] \"Event occurred\" object=\"statefulset-8088/ss-1\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-f801191a-e52b-4998-93b8-9f8547af6510\\\" \"\nI0919 13:43:07.186016       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:07.251987       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-5594-6408/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0919 13:43:07.481500       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-5594-6408/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0919 13:43:07.492848       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:07.548054       1 pv_controller.go:930] claim \"provisioning-947/pvc-kx2xh\" bound to volume \"local-ffgxt\"\nI0919 13:43:07.558070       1 pv_controller.go:879] volume \"local-ffgxt\" entered phase \"Bound\"\nI0919 13:43:07.558262       1 pv_controller.go:982] volume \"local-ffgxt\" bound to claim \"provisioning-947/pvc-kx2xh\"\nI0919 13:43:07.571056       1 pv_controller.go:823] claim \"provisioning-947/pvc-kx2xh\" entered phase \"Bound\"\nE0919 13:43:07.575643       1 tokens_controller.go:262] error synchronizing serviceaccount services-1639/default: secrets \"default-token-9krmm\" is forbidden: unable to create new content in namespace services-1639 because it is being terminated\nI0919 13:43:07.901856       1 namespace_controller.go:185] Namespace has been deleted disruption-5415\nI0919 13:43:08.220880       1 namespace_controller.go:185] Namespace 
has been deleted watch-5750\nE0919 13:43:08.434983       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:09.408907       1 namespace_controller.go:185] Namespace has been deleted dns-autoscaling-7772\nI0919 13:43:09.580192       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:09.914779       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-3322-1831/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0919 13:43:10.152897       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-3322-1831/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0919 13:43:10.381636       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:10.387985       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:10.503355       1 pv_controller.go:879] volume \"local-pv4tm5v\" entered phase \"Available\"\nI0919 13:43:10.609754       1 pv_controller.go:930] claim \"persistent-local-volumes-test-3048/pvc-bjc5l\" bound to volume \"local-pv4tm5v\"\nI0919 13:43:10.627671       1 pv_controller.go:879] volume \"local-pv4tm5v\" entered phase \"Bound\"\nI0919 13:43:10.627760       1 pv_controller.go:982] volume \"local-pv4tm5v\" bound to claim \"persistent-local-volumes-test-3048/pvc-bjc5l\"\nI0919 13:43:10.638861       1 pv_controller.go:823] claim \"persistent-local-volumes-test-3048/pvc-bjc5l\" entered phase \"Bound\"\nI0919 13:43:10.640712       1 namespace_controller.go:185] Namespace has been deleted 
persistent-local-volumes-test-2725\nI0919 13:43:10.794940       1 garbagecollector.go:475] \"Processing object\" object=\"statefulset-1762/test-ss-688b56f75c\" objectUID=3ba341e5-28e5-4a57-aad4-1d949b8960a3 kind=\"ControllerRevision\" virtual=false\nI0919 13:43:10.795147       1 stateful_set.go:440] StatefulSet has been deleted statefulset-1762/test-ss\nI0919 13:43:10.795212       1 garbagecollector.go:475] \"Processing object\" object=\"statefulset-1762/test-ss-0\" objectUID=0a8822e0-b33b-41f9-ba35-732a8e56ad0f kind=\"Pod\" virtual=false\nI0919 13:43:10.822888       1 garbagecollector.go:584] \"Deleting object\" object=\"statefulset-1762/test-ss-0\" objectUID=0a8822e0-b33b-41f9-ba35-732a8e56ad0f kind=\"Pod\" propagationPolicy=Background\nI0919 13:43:10.823916       1 garbagecollector.go:584] \"Deleting object\" object=\"statefulset-1762/test-ss-688b56f75c\" objectUID=3ba341e5-28e5-4a57-aad4-1d949b8960a3 kind=\"ControllerRevision\" propagationPolicy=Background\nI0919 13:43:11.524966       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-597/test-quota\nE0919 13:43:11.549094       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0919 13:43:11.650095       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-597/default: secrets \"default-token-f9pcv\" is forbidden: unable to create new content in namespace resourcequota-597 because it is being terminated\nI0919 13:43:11.967347       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:11.969607       1 event.go:294] \"Event occurred\" object=\"job-4069/fail-once-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"Completed\" message=\"Job completed\"\nI0919 13:43:12.109493       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 
13:43:12.390968       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nE0919 13:43:12.436575       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:12.541689       1 event.go:294] \"Event occurred\" object=\"provisioning-1455-732/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0919 13:43:12.598817       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-6558-6904/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0919 13:43:12.707592       1 namespace_controller.go:185] Namespace has been deleted services-1639\nI0919 13:43:12.712091       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-6558-6904/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0919 13:43:12.833475       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-3322/pvc-f7z9h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-3322\\\" or manually created by system administrator\"\nI0919 13:43:12.903820       1 pv_controller.go:879] volume \"pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c\" entered phase \"Bound\"\nI0919 13:43:12.904530       1 pv_controller.go:982] volume \"pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c\" bound to claim \"csi-mock-volumes-3322/pvc-f7z9h\"\nI0919 13:43:12.957351     
  1 pv_controller.go:823] claim \"csi-mock-volumes-3322/pvc-f7z9h\" entered phase \"Bound\"\nI0919 13:43:12.985440       1 event.go:294] \"Event occurred\" object=\"provisioning-1455/csi-hostpathclx4x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-1455\\\" or manually created by system administrator\"\nI0919 13:43:12.987316       1 event.go:294] \"Event occurred\" object=\"provisioning-1455/csi-hostpathclx4x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-1455\\\" or manually created by system administrator\"\nE0919 13:43:13.076996       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-8929/default: secrets \"default-token-kth6b\" is forbidden: unable to create new content in namespace provisioning-8929 because it is being terminated\nI0919 13:43:13.214108       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-5594/pvc-kdl5k\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5594\\\" or manually created by system administrator\"\nI0919 13:43:13.239320       1 pv_controller.go:879] volume \"pvc-21fdca84-c2ec-4703-9454-9124a4b07951\" entered phase \"Bound\"\nI0919 13:43:13.239369       1 pv_controller.go:982] volume \"pvc-21fdca84-c2ec-4703-9454-9124a4b07951\" bound to claim \"csi-mock-volumes-5594/pvc-kdl5k\"\nI0919 13:43:13.250706       1 pv_controller.go:823] claim \"csi-mock-volumes-5594/pvc-kdl5k\" entered phase \"Bound\"\nI0919 13:43:13.449643       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"services-9759/service-proxy-disabled\" need=3 
creating=3\nI0919 13:43:13.457728       1 event.go:294] \"Event occurred\" object=\"services-9759/service-proxy-disabled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-disabled-5qxfn\"\nI0919 13:43:13.467080       1 event.go:294] \"Event occurred\" object=\"services-9759/service-proxy-disabled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-disabled-ztqc7\"\nI0919 13:43:13.470133       1 event.go:294] \"Event occurred\" object=\"services-9759/service-proxy-disabled\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-proxy-disabled-fvzrg\"\nI0919 13:43:13.504975       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-5030/pvc-ckxtc\"\nI0919 13:43:13.516280       1 pv_controller.go:640] volume \"aws-mgqr9\" is released and reclaim policy \"Retain\" will be executed\nI0919 13:43:13.522466       1 pv_controller.go:879] volume \"aws-mgqr9\" entered phase \"Released\"\nI0919 13:43:13.692525       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-21fdca84-c2ec-4703-9454-9124a4b07951\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5594^4\") from node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nE0919 13:43:14.167066       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-9864/default: secrets \"default-token-spz62\" is forbidden: unable to create new content in namespace kubectl-9864 because it is being terminated\nI0919 13:43:14.229815       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-21fdca84-c2ec-4703-9454-9124a4b07951\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5594^4\") from node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:43:14.230246       1 event.go:294] \"Event occurred\" 
object=\"csi-mock-volumes-5594/pvc-volume-tester-467bq\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-21fdca84-c2ec-4703-9454-9124a4b07951\\\" \"\nE0919 13:43:14.360380       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:14.448848       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-9446-8317/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0919 13:43:14.556405       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-9446-8317/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0919 13:43:14.670282       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-9446-8317/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nI0919 13:43:15.303421       1 namespace_controller.go:185] Namespace has been deleted sctp-7855\nI0919 13:43:15.415306       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-3322^4\") from node \"ip-172-20-50-204.eu-central-1.compute.internal\" \nI0919 13:43:15.683979       1 job_controller.go:406] enqueueing job job-4069/fail-once-local\nI0919 13:43:15.951038       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume 
\"pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-3322^4\") from node \"ip-172-20-50-204.eu-central-1.compute.internal\" \nI0919 13:43:15.951267       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-3322/pvc-volume-tester-ldnpm\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c\\\" \"\nI0919 13:43:16.124581       1 pv_controller.go:879] volume \"pvc-c4c3d503-c7d5-4572-a967-ba708a4d6ee4\" entered phase \"Bound\"\nI0919 13:43:16.124654       1 pv_controller.go:982] volume \"pvc-c4c3d503-c7d5-4572-a967-ba708a4d6ee4\" bound to claim \"provisioning-1455/csi-hostpathclx4x\"\nI0919 13:43:16.133445       1 pv_controller.go:823] claim \"provisioning-1455/csi-hostpathclx4x\" entered phase \"Bound\"\nE0919 13:43:16.232932       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-4559/default: secrets \"default-token-2wg9s\" is forbidden: unable to create new content in namespace secrets-4559 because it is being terminated\nE0919 13:43:16.430867       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:16.672656       1 event.go:294] \"Event occurred\" object=\"volume-expand-3640/awsw7gvj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalExpanding\" message=\"CSI migration enabled for kubernetes.io/aws-ebs; waiting for external resizer to expand the pvc\"\nI0919 13:43:17.310343       1 event.go:294] \"Event occurred\" object=\"provisioning-9133-4391/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nE0919 
13:43:17.531306       1 tokens_controller.go:262] error synchronizing serviceaccount security-context-test-1178/default: secrets "default-token-kxx6w" is forbidden: unable to create new content in namespace security-context-test-1178 because it is being terminated
I0919 13:43:17.641869       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-c4c3d503-c7d5-4572-a967-ba708a4d6ee4" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1455^8aaa41d3-194f-11ec-bf0b-8e632a6451a9") from node "ip-172-20-62-71.eu-central-1.compute.internal"
I0919 13:43:17.854897       1 event.go:294] "Event occurred" object="provisioning-9133/pvc-cb2mq" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-9133\" or manually created by system administrator"
I0919 13:43:17.855499       1 event.go:294] "Event occurred" object="provisioning-9133/pvc-cb2mq" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-9133\" or manually created by system administrator"
E0919 13:43:17.968086       1 tokens_controller.go:262] error synchronizing serviceaccount job-4069/default: secrets "default-token-t9pzw" is forbidden: unable to create new content in namespace job-4069 because it is being terminated
I0919 13:43:18.057439       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.061716       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.065313       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.067424       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.070533       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.073643       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.075450       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.077372       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.090326       1 job_controller.go:406] enqueueing job job-4069/fail-once-local
I0919 13:43:18.168123       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-c4c3d503-c7d5-4572-a967-ba708a4d6ee4" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-1455^8aaa41d3-194f-11ec-bf0b-8e632a6451a9") from node "ip-172-20-62-71.eu-central-1.compute.internal"
I0919 13:43:18.168315       1 event.go:294] "Event occurred" object="provisioning-1455/pod-subpath-test-dynamicpv-wtrk" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-c4c3d503-c7d5-4572-a967-ba708a4d6ee4\" "
I0919 13:43:18.232412       1 namespace_controller.go:185] Namespace has been deleted resourcequota-597
I0919 13:43:18.237308       1 event.go:294] "Event occurred" object="csi-mock-volumes-6558/pvc-9gb54" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-6558\" or manually created by system administrator"
I0919 13:43:18.243894       1 event.go:294] "Event occurred" object="csi-mock-volumes-6558/pvc-9gb54" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-6558\" or manually created by system administrator"
I0919 13:43:18.282272       1 pv_controller.go:879] volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" entered phase "Bound"
I0919 13:43:18.282316       1 pv_controller.go:982] volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" bound to claim "csi-mock-volumes-6558/pvc-9gb54"
I0919 13:43:18.284061       1 namespace_controller.go:185] Namespace has been deleted apparmor-7872
I0919 13:43:18.295921       1 pv_controller.go:823] claim "csi-mock-volumes-6558/pvc-9gb54" entered phase "Bound"
I0919 13:43:18.410914       1 namespace_controller.go:185] Namespace has been deleted volume-8762
I0919 13:43:18.480865       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-6378
I0919 13:43:18.534165       1 namespace_controller.go:185] Namespace has been deleted endpointslice-4362
I0919 13:43:18.541324       1 namespace_controller.go:185] Namespace has been deleted provisioning-8929
I0919 13:43:18.783156       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6558^4") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:19.004840       1 namespace_controller.go:185] Namespace has been deleted provisioning-7491
I0919 13:43:19.249424       1 namespace_controller.go:185] Namespace has been deleted kubectl-9864
I0919 13:43:19.332939       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6558^4") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:19.333423       1 event.go:294] "Event occurred" object="csi-mock-volumes-6558/pvc-volume-tester-w4sbs" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-01010954-b671-4d71-b44e-3abd933cb03a\" "
I0919 13:43:19.818793       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "aws-mgqr9" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-080983315c6e662b5") on node "ip-172-20-62-71.eu-central-1.compute.internal"
I0919 13:43:19.829191       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-025c3a1c373704517") on node "ip-172-20-62-71.eu-central-1.compute.internal"
I0919 13:43:19.829443       1 operation_generator.go:1577] Verified volume is safe to detach for volume "aws-mgqr9" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-080983315c6e662b5") on node "ip-172-20-62-71.eu-central-1.compute.internal"
I0919 13:43:19.832363       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-025c3a1c373704517") on node "ip-172-20-62-71.eu-central-1.compute.internal"
I0919 13:43:20.510414       1 namespace_controller.go:185] Namespace has been deleted kubectl-4307
I0919 13:43:20.612389       1 pv_controller.go:879] volume "pvc-3a6eecff-00cf-46be-b0f9-1a9fc224c375" entered phase "Bound"
I0919 13:43:20.612495       1 pv_controller.go:982] volume "pvc-3a6eecff-00cf-46be-b0f9-1a9fc224c375" bound to claim "provisioning-9133/pvc-cb2mq"
I0919 13:43:20.624577       1 pv_controller.go:823] claim "provisioning-9133/pvc-cb2mq" entered phase "Bound"
I0919 13:43:21.133259       1 pv_controller.go:879] volume "local-pvxk62l" entered phase "Available"
I0919 13:43:21.175785       1 namespace_controller.go:185] Namespace has been deleted statefulset-1762
I0919 13:43:21.241102       1 pv_controller.go:930] claim "persistent-local-volumes-test-5969/pvc-66k8f" bound to volume "local-pvxk62l"
I0919 13:43:21.257962       1 pv_controller.go:879] volume "local-pvxk62l" entered phase "Bound"
I0919 13:43:21.258184       1 pv_controller.go:982] volume "local-pvxk62l" bound to claim "persistent-local-volumes-test-5969/pvc-66k8f"
I0919 13:43:21.273123       1 pv_controller.go:823] claim "persistent-local-volumes-test-5969/pvc-66k8f" entered phase "Bound"
I0919 13:43:21.372811       1 namespace_controller.go:185] Namespace has been deleted secrets-4559
I0919 13:43:21.576799       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-2571/sample-webhook-deployment-8f89dbb55" need=1 creating=1
I0919 13:43:21.577851       1 event.go:294] "Event occurred" object="webhook-2571/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-8f89dbb55 to 1"
I0919 13:43:21.585197       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-2571/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
E0919 13:43:21.594131       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:43:21.594524       1 event.go:294] "Event occurred" object="webhook-2571/sample-webhook-deployment-8f89dbb55" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-8f89dbb55-9sgpr"
I0919 13:43:21.744249       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-3a6eecff-00cf-46be-b0f9-1a9fc224c375" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9133^8d50a28b-194f-11ec-9f0e-faedea3272e2") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:21.752195       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-947/pvc-kx2xh"
I0919 13:43:21.763220       1 pv_controller.go:640] volume "local-ffgxt" is released and reclaim policy "Retain" will be executed
I0919 13:43:21.767117       1 pv_controller.go:879] volume "local-ffgxt" entered phase "Released"
I0919 13:43:21.814278       1 namespace_controller.go:185] Namespace has been deleted webhook-6232
I0919 13:43:21.864998       1 pv_controller_base.go:521] deletion of claim "provisioning-947/pvc-kx2xh" was already processed
I0919 13:43:22.297322       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-3a6eecff-00cf-46be-b0f9-1a9fc224c375" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9133^8d50a28b-194f-11ec-9f0e-faedea3272e2") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:22.297426       1 event.go:294] "Event occurred" object="provisioning-9133/hostpath-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3a6eecff-00cf-46be-b0f9-1a9fc224c375\" "
I0919 13:43:22.679168       1 namespace_controller.go:185] Namespace has been deleted security-context-test-1178
I0919 13:43:22.740764       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-3048/pod-7fc1963b-73a0-4d9d-ae30-97dbf6f60536" PVC="persistent-local-volumes-test-3048/pvc-bjc5l"
I0919 13:43:22.740793       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-3048/pvc-bjc5l"
I0919 13:43:22.937060       1 replica_set.go:563] "Too few replicas" replicaSet="services-9759/service-proxy-toggled" need=3 creating=3
I0919 13:43:22.944675       1 event.go:294] "Event occurred" object="services-9759/service-proxy-toggled" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-proxy-toggled-4m2fp"
I0919 13:43:22.951266       1 event.go:294] "Event occurred" object="services-9759/service-proxy-toggled" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-proxy-toggled-9bcbk"
I0919 13:43:22.954994       1 event.go:294] "Event occurred" object="services-9759/service-proxy-toggled" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-proxy-toggled-87xhs"
I0919 13:43:23.155297       1 namespace_controller.go:185] Namespace has been deleted job-4069
E0919 13:43:23.296349       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:43:23.296880       1 event.go:294] "Event occurred" object="statefulset-8088/datadir-ss-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0919 13:43:23.297480       1 event.go:294] "Event occurred" object="statefulset-8088/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Claim datadir-ss-2 Pod ss-2 in StatefulSet ss success"
I0919 13:43:23.303038       1 event.go:294] "Event occurred" object="statefulset-8088/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
I0919 13:43:23.321629       1 event.go:294] "Event occurred" object="statefulset-8088/datadir-ss-2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0919 13:43:23.812035       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [], removed: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-5025-crds crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-7312-crds stable.example.com/v2, Resource=e2e-test-crd-webhook-6863-crds]
I0919 13:43:23.812214       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0919 13:43:23.812249       1 shared_informer.go:247] Caches are synced for resource quota
I0919 13:43:23.812258       1 resource_quota_controller.go:454] synced quota controller
I0919 13:43:24.061957       1 garbagecollector.go:217] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-5025-crds crd-publish-openapi-test-common-group.example.com/v6, Resource=e2e-test-crd-publish-openapi-7312-crds stable.example.com/v2, Resource=e2e-test-crd-webhook-6863-crds]
I0919 13:43:24.062142       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0919 13:43:24.062243       1 shared_informer.go:247] Caches are synced for garbage collector
I0919 13:43:24.062261       1 garbagecollector.go:258] synced garbage collector
I0919 13:43:24.736468       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-3048/pod-7fc1963b-73a0-4d9d-ae30-97dbf6f60536" PVC="persistent-local-volumes-test-3048/pvc-bjc5l"
I0919 13:43:24.736493       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-3048/pvc-bjc5l"
I0919 13:43:24.836040       1 event.go:294] "Event occurred" object="csi-mock-volumes-9446/pvc-rbvn7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-9446\" or manually created by system administrator"
I0919 13:43:24.857156       1 pv_controller.go:879] volume "pvc-68fa203a-65f2-4047-b072-63f972c72300" entered phase "Bound"
I0919 13:43:24.857199       1 pv_controller.go:982] volume "pvc-68fa203a-65f2-4047-b072-63f972c72300" bound to claim "csi-mock-volumes-9446/pvc-rbvn7"
I0919 13:43:24.865230       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-2992/pvc-45x7c"
I0919 13:43:24.868133       1 pv_controller.go:823] claim "csi-mock-volumes-9446/pvc-rbvn7" entered phase "Bound"
I0919 13:43:24.874157       1 pv_controller.go:640] volume "pvc-858517c7-c76a-4b36-9c25-b00d377ad9ef" is released and reclaim policy "Delete" will be executed
I0919 13:43:24.876825       1 pv_controller.go:879] volume "pvc-858517c7-c76a-4b36-9c25-b00d377ad9ef" entered phase "Released"
I0919 13:43:24.881213       1 pv_controller.go:1340] isVolumeReleased[pvc-858517c7-c76a-4b36-9c25-b00d377ad9ef]: volume is released
I0919 13:43:24.889524       1 pv_controller_base.go:521] deletion of claim "csi-mock-volumes-2992/pvc-45x7c" was already processed
I0919 13:43:25.135779       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-3048/pod-7fc1963b-73a0-4d9d-ae30-97dbf6f60536" PVC="persistent-local-volumes-test-3048/pvc-bjc5l"
I0919 13:43:25.135964       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-3048/pvc-bjc5l"
I0919 13:43:25.145768       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-3048/pvc-bjc5l"
I0919 13:43:25.175392       1 pv_controller.go:640] volume "local-pv4tm5v" is released and reclaim policy "Retain" will be executed
I0919 13:43:25.183034       1 pv_controller.go:879] volume "local-pv4tm5v" entered phase "Released"
I0919 13:43:25.192082       1 pv_controller_base.go:521] deletion of claim "persistent-local-volumes-test-3048/pvc-bjc5l" was already processed
I0919 13:43:25.381430       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-68fa203a-65f2-4047-b072-63f972c72300" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9446^4") from node "ip-172-20-55-38.eu-central-1.compute.internal"
I0919 13:43:25.431077       1 namespace_controller.go:185] Namespace has been deleted provisioning-9037
E0919 13:43:25.807253       1 namespace_controller.go:162] deletion of namespace apply-14 failed: unexpected items still remain in namespace: apply-14 for gvr: /v1, Resource=pods
I0919 13:43:25.924792       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-68fa203a-65f2-4047-b072-63f972c72300" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-9446^4") from node "ip-172-20-55-38.eu-central-1.compute.internal"
I0919 13:43:25.924991       1 event.go:294] "Event occurred" object="csi-mock-volumes-9446/pvc-volume-tester-m9qcl" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-68fa203a-65f2-4047-b072-63f972c72300\" "
I0919 13:43:26.523603       1 pv_controller_base.go:521] deletion of claim "volume-5030/pvc-ckxtc" was already processed
I0919 13:43:26.668008       1 pv_controller.go:879] volume "pvc-fa651547-61ce-437d-8369-fe83fa7a76d2" entered phase "Bound"
I0919 13:43:26.668055       1 pv_controller.go:982] volume "pvc-fa651547-61ce-437d-8369-fe83fa7a76d2" bound to claim "statefulset-8088/datadir-ss-2"
I0919 13:43:26.677332       1 pv_controller.go:823] claim "statefulset-8088/datadir-ss-2" entered phase "Bound"
I0919 13:43:26.922504       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-1455/csi-hostpathclx4x"
I0919 13:43:26.929305       1 pv_controller.go:640] volume "pvc-c4c3d503-c7d5-4572-a967-ba708a4d6ee4" is released and reclaim policy "Delete" will be executed
I0919 13:43:26.935778       1 pv_controller.go:879] volume "pvc-c4c3d503-c7d5-4572-a967-ba708a4d6ee4" entered phase "Released"
I0919 13:43:26.943960       1 pv_controller.go:1340] isVolumeReleased[pvc-c4c3d503-c7d5-4572-a967-ba708a4d6ee4]: volume is released
I0919 13:43:26.961326       1 pv_controller_base.go:521] deletion of claim "provisioning-1455/csi-hostpathclx4x" was already processed
E0919 13:43:27.014057       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0919 13:43:27.225455       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
I0919 13:43:27.321317       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "aws-mgqr9" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-080983315c6e662b5") on node "ip-172-20-62-71.eu-central-1.compute.internal"
E0919 13:43:27.349590       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
I0919 13:43:27.409994       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-fa651547-61ce-437d-8369-fe83fa7a76d2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0a26c6ea6999908a1") from node "ip-172-20-48-58.eu-central-1.compute.internal"
E0919 13:43:27.514893       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
E0919 13:43:27.639076       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
E0919 13:43:27.787822       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
E0919 13:43:28.002924       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
E0919 13:43:28.404445       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
E0919 13:43:28.844315       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
E0919 13:43:29.112099       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-3048/default: secrets "default-token-nkqn8" is forbidden: unable to create new content in namespace persistent-local-volumes-test-3048 because it is being terminated
I0919 13:43:29.653245       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-fa651547-61ce-437d-8369-fe83fa7a76d2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0a26c6ea6999908a1") from node "ip-172-20-48-58.eu-central-1.compute.internal"
I0919 13:43:29.653670       1 event.go:294] "Event occurred" object="statefulset-8088/ss-2" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-fa651547-61ce-437d-8369-fe83fa7a76d2\" "
E0919 13:43:29.828615       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
E0919 13:43:30.137133       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:43:30.317094       1 namespace_controller.go:185] Namespace has been deleted subpath-8076
E0919 13:43:30.495117       1 tokens_controller.go:262] error synchronizing serviceaccount volume-5030/default: secrets "default-token-ppw7c" is forbidden: unable to create new content in namespace volume-5030 because it is being terminated
I0919 13:43:30.801410       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-73c287ee-a0e3-4556-8634-fc231ea4f147" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0aa21741d44293999") on node "ip-172-20-55-38.eu-central-1.compute.internal"
I0919 13:43:30.872584       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-73c287ee-a0e3-4556-8634-fc231ea4f147" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0aa21741d44293999") on node "ip-172-20-55-38.eu-central-1.compute.internal"
I0919 13:43:31.030118       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-3064/sample-webhook-deployment-8f89dbb55" need=1 creating=1
I0919 13:43:31.032846       1 event.go:294] "Event occurred" object="webhook-3064/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-8f89dbb55 to 1"
I0919 13:43:31.053371       1 event.go:294] "Event occurred" object="webhook-3064/sample-webhook-deployment-8f89dbb55" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-8f89dbb55-qpfw7"
I0919 13:43:31.055157       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-3064/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:43:31.093409       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-3064/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
E0919 13:43:31.258003       1 namespace_controller.go:162] deletion of namespace kubelet-test-7512 failed: unexpected items still remain in namespace: kubelet-test-7512 for gvr: /v1, Resource=pods
I0919 13:43:31.478066       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-5969/pod-c124b391-2a95-4c50-a70c-98a136ffb652" PVC="persistent-local-volumes-test-5969/pvc-66k8f"
I0919 13:43:31.478244       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-5969/pvc-66k8f"
I0919 13:43:32.292979       1 event.go:294] "Event occurred" object="volume-expand-8655/aws7jjr9" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalExpanding" message="CSI migration enabled for kubernetes.io/aws-ebs; waiting for external resizer to expand the pvc"
E0919 13:43:32.296451       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-1455/default: secrets "default-token-8mz4r" is forbidden: unable to create new content in namespace provisioning-1455 because it is being terminated
I0919 13:43:32.426586       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-3322/pvc-f7z9h"
I0919 13:43:32.441342       1 pv_controller.go:640] volume "pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c" is released and reclaim policy "Delete" will be executed
I0919 13:43:32.444853       1 pv_controller.go:879] volume "pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c" entered phase "Released"
I0919 13:43:32.453559       1 pv_controller.go:1340] isVolumeReleased[pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c]: volume is released
I0919 13:43:32.659495       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-5969/pod-c124b391-2a95-4c50-a70c-98a136ffb652" PVC="persistent-local-volumes-test-5969/pvc-66k8f"
I0919 13:43:32.659523       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-5969/pvc-66k8f"
I0919 13:43:32.663694       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-5969/pod-c124b391-2a95-4c50-a70c-98a136ffb652" PVC="persistent-local-volumes-test-5969/pvc-66k8f"
I0919 13:43:32.663894       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-5969/pvc-66k8f"
I0919 13:43:32.671413       1 pvc_protection_controller.go:291] "PVC is unused" PVC="persistent-local-volumes-test-5969/pvc-66k8f"
I0919 13:43:32.682174       1 pv_controller.go:640] volume "local-pvxk62l" is released and reclaim policy "Retain" will be executed
I0919 13:43:32.688064       1 pv_controller.go:879] volume "local-pvxk62l" entered phase "Released"
I0919 13:43:32.698491       1 pv_controller_base.go:521] deletion of claim "persistent-local-volumes-test-5969/pvc-66k8f" was already processed
I0919 13:43:32.927492       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-3322^4") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:32.932330       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-3322^4") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:33.124718       1 garbagecollector.go:475] "Processing object" object="webhook-2571/e2e-test-webhook-w7m2s" objectUID=41ef840d-5d5e-46d9-8efa-60afd297cbca kind="EndpointSlice" virtual=false
I0919 13:43:33.150505       1 garbagecollector.go:584] "Deleting object" object="webhook-2571/e2e-test-webhook-w7m2s" objectUID=41ef840d-5d5e-46d9-8efa-60afd297cbca kind="EndpointSlice" propagationPolicy=Background
I0919 13:43:33.257149       1 garbagecollector.go:475] "Processing object" object="webhook-2571/sample-webhook-deployment-8f89dbb55" objectUID=39c8c97a-4829-41f6-a720-879be5249d57 kind="ReplicaSet" virtual=false
I0919 13:43:33.257397       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-2571/sample-webhook-deployment"
I0919 13:43:33.260027       1 garbagecollector.go:584] "Deleting object" object="webhook-2571/sample-webhook-deployment-8f89dbb55" objectUID=39c8c97a-4829-41f6-a720-879be5249d57 kind="ReplicaSet" propagationPolicy=Background
I0919 13:43:33.262470       1 garbagecollector.go:475] "Processing object" object="webhook-2571/sample-webhook-deployment-8f89dbb55-9sgpr" objectUID=5e798f0c-0691-4fa4-9049-06e498014988 kind="Pod" virtual=false
I0919 13:43:33.264140       1 garbagecollector.go:584] "Deleting object" object="webhook-2571/sample-webhook-deployment-8f89dbb55-9sgpr" objectUID=5e798f0c-0691-4fa4-9049-06e498014988 kind="Pod" propagationPolicy=Background
I0919 13:43:33.274163       1 garbagecollector.go:475] "Processing object" object="webhook-2571/sample-webhook-deployment-8f89dbb55-9sgpr" objectUID=af76a481-2260-4795-9aae-f03f18c0ebc1 kind="CiliumEndpoint" virtual=false
I0919 13:43:33.285481       1 garbagecollector.go:584] "Deleting object" object="webhook-2571/sample-webhook-deployment-8f89dbb55-9sgpr" objectUID=af76a481-2260-4795-9aae-f03f18c0ebc1 kind="CiliumEndpoint" propagationPolicy=Background
E0919 13:43:33.291777       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0919 13:43:33.310309       1 tokens_controller.go:262] error synchronizing serviceaccount ssh-5220/default: secrets "default-token-l26wg" is forbidden: unable to create new content in namespace ssh-5220 because it is being terminated
I0919 13:43:33.326574       1 namespace_controller.go:185] Namespace has been deleted downward-api-3346
I0919 13:43:33.377423       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-025c3a1c373704517") on node "ip-172-20-62-71.eu-central-1.compute.internal"
I0919 13:43:33.387821       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6558^4") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:33.387888       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-025c3a1c373704517") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:33.432772       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6558^4") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:33.445108       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-cc9796da-59b6-43a9-82c2-b35aa3b91d3c" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-3322^4") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:33.631921       1 pv_controller_base.go:521] deletion of claim "csi-mock-volumes-3322/pvc-f7z9h" was already processed
I0919 13:43:33.860937       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-6558/pvc-9gb54"
I0919 13:43:33.871942       1 pv_controller.go:640] volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" is released and reclaim policy "Delete" will be executed
I0919 13:43:33.879921       1 pv_controller.go:879] volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" entered phase "Released"
I0919 13:43:33.889066       1 pv_controller.go:1340] isVolumeReleased[pvc-01010954-b671-4d71-b44e-3abd933cb03a]: volume is released
I0919 13:43:33.921089       1 pv_controller_base.go:521] deletion of claim "csi-mock-volumes-6558/pvc-9gb54" was already processed
I0919 13:43:33.952938       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-01010954-b671-4d71-b44e-3abd933cb03a" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-6558^4") on node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:34.159935       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3048
E0919 13:43:34.176413       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:43:34.458837       1 event.go:294] "Event occurred" object="fsgroupchangepolicy-1072/aws5hk5l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0919 13:43:34.772711       1 event.go:294] "Event occurred" object="fsgroupchangepolicy-1072/aws5hk5l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0919 13:43:34.797139       1 garbagecollector.go:475] "Processing object" object="csi-mock-volumes-2992-6340/csi-mockplugin-76787f7659" objectUID=16dea5db-babc-4aad-93ec-cb7d497b41d2 kind="ControllerRevision" virtual=false
I0919 13:43:34.797403       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-2992-6340/csi-mockplugin
I0919 13:43:34.797459       1 garbagecollector.go:475] "Processing object" object="csi-mock-volumes-2992-6340/csi-mockplugin-0" objectUID=83ace0c6-1811-4016-9268-64a9a50174d7 kind="Pod" virtual=false
I0919 13:43:34.808961       1 garbagecollector.go:584] "Deleting object" object="csi-mock-volumes-2992-6340/csi-mockplugin-0" objectUID=83ace0c6-1811-4016-9268-64a9a50174d7 kind="Pod" propagationPolicy=Background
I0919 13:43:34.809200       1 garbagecollector.go:584] "Deleting object" object="csi-mock-volumes-2992-6340/csi-mockplugin-76787f7659" objectUID=16dea5db-babc-4aad-93ec-cb7d497b41d2 kind="ControllerRevision" propagationPolicy=Background
I0919 13:43:34.848990       1 namespace_controller.go:185] Namespace has been deleted provisioning-947
I0919 13:43:35.464384       1 event.go:294] "Event occurred" object="webhook-3912/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-8f89dbb55 to 1"
I0919 13:43:35.475852       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-3912/sample-webhook-deployment-8f89dbb55" need=1 creating=1
I0919 13:43:35.478596       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-3912/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0919 13:43:35.495081       1 event.go:294] "Event occurred" object="webhook-3912/sample-webhook-deployment-8f89dbb55" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-8f89dbb55-8gr5q"
I0919 13:43:35.578174       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-2992
I0919 13:43:35.721019       1 namespace_controller.go:185] Namespace has been deleted volume-5030
I0919 13:43:36.331507       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-025c3a1c373704517") from node "ip-172-20-50-204.eu-central-1.compute.internal"
I0919 13:43:36.331937       1 event.go:294] "Event occurred" object="volume-expand-3640/pod-44c9d0ea-2978-436f-8b0b-cfe161246028" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-cbdccd7b-af00-4836-bcbe-5d6b01302ce0\" "
I0919 13:43:36.410411       1 graph_builder.go:587] add [v1/Pod, namespace: csi-mock-volumes-5594, name: inline-volume-gwfnf, uid: ae38b823-c786-4672-9dbf-69a88e76c633] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0919 13:43:36.410594       1 garbagecollector.go:475] "Processing object" object="csi-mock-volumes-5594/inline-volume-gwfnf" objectUID=ae38b823-c786-4672-9dbf-69a88e76c633 kind="Pod" virtual=false
I0919 13:43:36.412703       1 garbagecollector.go:594] remove DeleteDependents finalizer for item [v1/Pod, namespace: csi-mock-volumes-5594, name: inline-volume-gwfnf, uid: ae38b823-c786-4672-9dbf-69a88e76c633]
E0919 13:43:36.583638       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0919 13:43:37.508835       1 namespace_controller.go:185] Namespace has been deleted provisioning-1455
I0919 13:43:37.549076       1 event.go:294] "Event occurred" object="fsgroupchangepolicy-1072/aws5hk5l" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0919 13:43:37.794760       1 garbagecollector.go:475] "Processing object" object="provisioning-1455-732/csi-hostpathplugin-68bdd746" objectUID=9c4ba46e-3531-4378-b2d6-571d91bd878e kind="ControllerRevision" virtual=false
I0919 13:43:37.794951       1 stateful_set.go:440] StatefulSet has been deleted provisioning-1455-732/csi-hostpathplugin
I0919 13:43:37.794978       1 garbagecollector.go:475] "Processing object" object="provisioning-1455-732/csi-hostpathplugin-0" objectUID=dc1ebef1-7703-48bc-b539-66fadb72aabe kind="Pod" virtual=false
I0919 13:43:37.797379       1 garbagecollector.go:584] "Deleting object" object="provisioning-1455-732/csi-hostpathplugin-68bdd746" objectUID=9c4ba46e-3531-4378-b2d6-571d91bd878e kind="ControllerRevision" propagationPolicy=Background
I0919 13:43:37.797799       1 garbagecollector.go:584] "Deleting object" object="provisioning-1455-732/csi-hostpathplugin-0" objectUID=dc1ebef1-7703-48bc-b539-66fadb72aabe kind="Pod" propagationPolicy=Background
I0919 13:43:37.885177       1 stateful_set_control.go:555] StatefulSet statefulset-7255/ss2 terminating Pod ss2-2 for update
I0919 13:43:37.895694       1 event.go:294] "Event occurred" object="statefulset-7255/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss2-2 in StatefulSet ss2 successful"
E0919 13:43:37.950651       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-2571/default: secrets "default-token-l7fzw" is forbidden:
unable to create new content in namespace webhook-2571 because it is being terminated\nE0919 13:43:38.101097       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-2571-markers/default: secrets \"default-token-7fqsl\" is forbidden: unable to create new content in namespace webhook-2571-markers because it is being terminated\nE0919 13:43:38.151045       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0919 13:43:38.287005       1 pv_controller.go:879] volume \"local-pvdt2dp\" entered phase \"Available\"\nI0919 13:43:38.305611       1 pv_controller.go:879] volume \"pvc-4b43cc8f-50d7-43c4-98d0-b650ec2b3584\" entered phase \"Bound\"\nI0919 13:43:38.305807       1 pv_controller.go:982] volume \"pvc-4b43cc8f-50d7-43c4-98d0-b650ec2b3584\" bound to claim \"fsgroupchangepolicy-1072/aws5hk5l\"\nI0919 13:43:38.314878       1 pv_controller.go:823] claim \"fsgroupchangepolicy-1072/aws5hk5l\" entered phase \"Bound\"\nI0919 13:43:38.386453       1 pv_controller.go:930] claim \"persistent-local-volumes-test-3815/pvc-xs2fx\" bound to volume \"local-pvdt2dp\"\nI0919 13:43:38.399466       1 pv_controller.go:879] volume \"local-pvdt2dp\" entered phase \"Bound\"\nI0919 13:43:38.399505       1 pv_controller.go:982] volume \"local-pvdt2dp\" bound to claim \"persistent-local-volumes-test-3815/pvc-xs2fx\"\nI0919 13:43:38.410583       1 pv_controller.go:823] claim \"persistent-local-volumes-test-3815/pvc-xs2fx\" entered phase \"Bound\"\nI0919 13:43:38.421524       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-73c287ee-a0e3-4556-8634-fc231ea4f147\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0aa21741d44293999\") on node \"ip-172-20-55-38.eu-central-1.compute.internal\" \nI0919 13:43:38.754373       1 reconciler.go:295] attach