Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-12 13:26
Elapsed: 32m19s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 124 lines ...
I0912 13:27:02.227942    4070 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0912 13:27:02.229696    4070 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.23.0-alpha.2+v1.23.0-alpha.1-125-g5324d3d66a/linux/amd64/kops
I0912 13:27:02.985742    4070 up.go:43] Cleaning up any leaked resources from previous cluster
I0912 13:27:02.985773    4070 dumplogs.go:38] /logs/artifacts/f00be6fb-13cc-11ec-a039-aeaa48941d38/kops toolbox dump --name e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0912 13:27:03.008906    4091 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0912 13:27:03.009036    4091 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io" not found
W0912 13:27:03.605981    4070 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0912 13:27:03.606042    4070 down.go:48] /logs/artifacts/f00be6fb-13cc-11ec-a039-aeaa48941d38/kops delete cluster --name e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io --yes
I0912 13:27:03.621809    4101 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0912 13:27:03.622417    4101 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io" not found
I0912 13:27:04.147139    4070 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/09/12 13:27:04 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
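The two curl lines above are a metadata-first, public-service-fallback probe for the runner's external IP: the GCE metadata endpoint 404s (this job runs where that path is absent), so the harness falls back to a public IP-echo service. A minimal sketch of that flow, with both endpoints stubbed so it runs anywhere (the stub functions and the 203.0.113.50 documentation address are illustrative, not from the job):

```shell
# Stand-ins for the two lookups in the log: metadata_ip mimics the GCE
# metadata URL returning 404 (non-zero exit), public_ip mimics the
# public IP-echo fallback (https://ip.jsb.workers.dev in the log).
metadata_ip() { return 1; }             # metadata service unavailable here
public_ip()   { echo "203.0.113.50"; }  # fallback echoes a documentation IP

# Try metadata first; on failure, fall back to the public service.
external_ip=$(metadata_ip) || external_ip=$(public_ip)
echo "external ip: $external_ip"
```

The assignment's exit status is that of the command substitution, so `||` only triggers the fallback when the metadata lookup fails.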
I0912 13:27:04.154850    4070 http.go:37] curl https://ip.jsb.workers.dev
I0912 13:27:04.242905    4070 up.go:144] /logs/artifacts/f00be6fb-13cc-11ec-a039-aeaa48941d38/kops create cluster --name e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.23.0-alpha.1 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-arm64-server-20210907 --channel=alpha --networking=cilium --container-runtime=containerd --zones=eu-central-1a --node-size=m6g.large --master-size=m6g.large --admin-access 35.192.48.56/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48
I0912 13:27:04.261727    4113 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0912 13:27:04.262272    4113 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
I0912 13:27:04.287684    4113 create_cluster.go:827] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0912 13:27:04.841474    4113 new_cluster.go:1052]  Cloud Provider ID = aws
... skipping 31 lines ...

I0912 13:27:29.893093    4070 up.go:181] /logs/artifacts/f00be6fb-13cc-11ec-a039-aeaa48941d38/kops validate cluster --name e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0912 13:27:29.909567    4134 featureflag.go:166] FeatureFlag "SpecOverrideFlag"=true
I0912 13:27:29.909998    4134 featureflag.go:166] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io

W0912 13:27:31.294877    4134 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
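The message above describes the mechanism: kops first publishes a placeholder A record (203.0.113.123) for the API, and the dns-controller deployment overwrites it once a master is up. A stubbed sketch of the "is DNS still on the placeholder?" check (the `resolve` function stands in for a real `dig`/`host` lookup; only the placeholder address comes from the log):

```shell
# Hypothetical check for the placeholder-DNS state described above.
# resolve() stubs a DNS lookup; here it pretends the record has not
# yet been updated, which is exactly the state this job is stuck in.
PLACEHOLDER="203.0.113.123"
resolve() { echo "203.0.113.123"; }

api_ip=$(resolve "api.e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io")
if [ "$api_ip" = "$PLACEHOLDER" ]; then
  echo "dns-controller has not updated the API record yet"
else
  echo "API record updated: $api_ip"
fi
```

With a real resolver in place of the stub, a non-placeholder answer is the signal that validation can start succeeding.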

Validation Failed
W0912 13:27:41.325649    4134 validate_cluster.go:232] (will retry): cluster not yet healthy
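The `(will retry)` warnings show `kops validate cluster --wait 15m0s` polling roughly every 10 seconds until the cluster is healthy or the wait expires. A minimal stand-in for that poll-until-healthy pattern (the `check_cluster` function is a stub that succeeds on the third attempt; real kops performs a full validation each round):

```shell
# Sketch of the retry loop behind `kops validate cluster --wait`.
# check_cluster is a stand-in for one validation pass; it fails
# twice and then reports healthy.
attempt=0
check_cluster() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]
}

tries=0
until check_cluster; do
  tries=$((tries + 1))
  echo "(will retry): cluster not yet healthy (attempt $tries)"
  sleep 0  # the real loop waits ~10s between validation passes
done
echo "cluster healthy after $attempt validations"
```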
... skipping 335 lines (identical validation retries: dns apiserver Validation Failed) ...
W0912 13:31:12.369201    4134 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 10 lines ...
Pod	kube-system/cilium-kd6j9			system-node-critical pod "cilium-kd6j9" is not ready (cilium-agent)
Pod	kube-system/coredns-5dc785954d-wdwbc		system-cluster-critical pod "coredns-5dc785954d-wdwbc" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-6nr7n	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-6nr7n" is pending
Pod	kube-system/ebs-csi-controller-77f4d67f86-n95d5	system-cluster-critical pod "ebs-csi-controller-77f4d67f86-n95d5" is pending
Pod	kube-system/ebs-csi-node-x597v			system-node-critical pod "ebs-csi-node-x597v" is pending

Validation Failed
W0912 13:31:25.418241    4134 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 13 lines ...
Pod	kube-system/coredns-5dc785954d-wdwbc		system-cluster-critical pod "coredns-5dc785954d-wdwbc" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-6nr7n	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-6nr7n" is pending
Pod	kube-system/ebs-csi-controller-77f4d67f86-n95d5	system-cluster-critical pod "ebs-csi-controller-77f4d67f86-n95d5" is pending
Pod	kube-system/ebs-csi-node-cxcnx			system-node-critical pod "ebs-csi-node-cxcnx" is pending
Pod	kube-system/ebs-csi-node-x597v			system-node-critical pod "ebs-csi-node-x597v" is pending

Validation Failed
W0912 13:31:37.408840    4134 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 22 lines ...
Pod	kube-system/ebs-csi-node-cxcnx			system-node-critical pod "ebs-csi-node-cxcnx" is pending
Pod	kube-system/ebs-csi-node-t9xf9			system-node-critical pod "ebs-csi-node-t9xf9" is pending
Pod	kube-system/ebs-csi-node-w6t9f			system-node-critical pod "ebs-csi-node-w6t9f" is pending
Pod	kube-system/ebs-csi-node-x597v			system-node-critical pod "ebs-csi-node-x597v" is pending
Pod	kube-system/ebs-csi-node-z6jkt			system-node-critical pod "ebs-csi-node-z6jkt" is pending

Validation Failed
W0912 13:31:49.448810    4134 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 18 lines ...
Pod	kube-system/ebs-csi-node-cxcnx			system-node-critical pod "ebs-csi-node-cxcnx" is pending
Pod	kube-system/ebs-csi-node-t9xf9			system-node-critical pod "ebs-csi-node-t9xf9" is pending
Pod	kube-system/ebs-csi-node-w6t9f			system-node-critical pod "ebs-csi-node-w6t9f" is pending
Pod	kube-system/ebs-csi-node-x597v			system-node-critical pod "ebs-csi-node-x597v" is pending
Pod	kube-system/ebs-csi-node-z6jkt			system-node-critical pod "ebs-csi-node-z6jkt" is pending

Validation Failed
W0912 13:32:01.410791    4134 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 12 lines ...
Pod	kube-system/ebs-csi-node-cxcnx			system-node-critical pod "ebs-csi-node-cxcnx" is pending
Pod	kube-system/ebs-csi-node-t9xf9			system-node-critical pod "ebs-csi-node-t9xf9" is pending
Pod	kube-system/ebs-csi-node-w6t9f			system-node-critical pod "ebs-csi-node-w6t9f" is pending
Pod	kube-system/ebs-csi-node-x597v			system-node-critical pod "ebs-csi-node-x597v" is pending
Pod	kube-system/ebs-csi-node-z6jkt			system-node-critical pod "ebs-csi-node-z6jkt" is pending

Validation Failed
W0912 13:32:13.311406    4134 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	m6g.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	m6g.large	4	4	eu-central-1a

... skipping 688 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 338 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:34:45.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2657" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
STEP: Destroying namespace "apply-5567" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:34:44.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
W0912 13:34:44.490947    4794 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 12 13:34:44.491: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should ignore not found error with --for=delete
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
STEP: calling kubectl wait --for=delete
Sep 12 13:34:44.708: INFO: Running '/tmp/kubectl3391257765/kubectl --server=https://api.e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1622 wait --for=delete pod/doesnotexist'
Sep 12 13:34:45.806: INFO: stderr: ""
Sep 12 13:34:45.806: INFO: stdout: ""
Sep 12 13:34:45.806: INFO: Running '/tmp/kubectl3391257765/kubectl --server=https://api.e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-1622 wait --for=delete pod --selector=app.kubernetes.io/name=noexist'
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:34:46.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1622" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:46.694: INFO: Only supported for providers [vsphere] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 55 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 12 13:34:44.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21e11266-45c0-49f9-83ff-56046bdbf680" in namespace "downward-api-7812" to be "Succeeded or Failed"
Sep 12 13:34:45.009: INFO: Pod "downwardapi-volume-21e11266-45c0-49f9-83ff-56046bdbf680": Phase="Pending", Reason="", readiness=false. Elapsed: 109.604744ms
Sep 12 13:34:47.119: INFO: Pod "downwardapi-volume-21e11266-45c0-49f9-83ff-56046bdbf680": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2200819s
Sep 12 13:34:49.229: INFO: Pod "downwardapi-volume-21e11266-45c0-49f9-83ff-56046bdbf680": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329916594s
STEP: Saw pod success
Sep 12 13:34:49.229: INFO: Pod "downwardapi-volume-21e11266-45c0-49f9-83ff-56046bdbf680" satisfied condition "Succeeded or Failed"
Sep 12 13:34:49.338: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod downwardapi-volume-21e11266-45c0-49f9-83ff-56046bdbf680 container client-container: <nil>
STEP: delete the pod
Sep 12 13:34:49.590: INFO: Waiting for pod downwardapi-volume-21e11266-45c0-49f9-83ff-56046bdbf680 to disappear
Sep 12 13:34:49.699: INFO: Pod downwardapi-volume-21e11266-45c0-49f9-83ff-56046bdbf680 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.958 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:50.071: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
Sep 12 13:34:44.754: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-a0500924-2b62-4da3-b81b-fdb76023d01b
STEP: Creating a pod to test consume secrets
Sep 12 13:34:45.214: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a20d3de9-e9bd-4f64-a26a-1b217194e351" in namespace "projected-3310" to be "Succeeded or Failed"
Sep 12 13:34:45.323: INFO: Pod "pod-projected-secrets-a20d3de9-e9bd-4f64-a26a-1b217194e351": Phase="Pending", Reason="", readiness=false. Elapsed: 108.377582ms
Sep 12 13:34:47.433: INFO: Pod "pod-projected-secrets-a20d3de9-e9bd-4f64-a26a-1b217194e351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218310038s
Sep 12 13:34:49.541: INFO: Pod "pod-projected-secrets-a20d3de9-e9bd-4f64-a26a-1b217194e351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327043826s
STEP: Saw pod success
Sep 12 13:34:49.541: INFO: Pod "pod-projected-secrets-a20d3de9-e9bd-4f64-a26a-1b217194e351" satisfied condition "Succeeded or Failed"
Sep 12 13:34:49.660: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-projected-secrets-a20d3de9-e9bd-4f64-a26a-1b217194e351 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 12 13:34:49.901: INFO: Waiting for pod pod-projected-secrets-a20d3de9-e9bd-4f64-a26a-1b217194e351 to disappear
Sep 12 13:34:50.010: INFO: Pod pod-projected-secrets-a20d3de9-e9bd-4f64-a26a-1b217194e351 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.261 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0912 13:34:45.954983    4876 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 12 13:34:45.955: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 12 13:34:46.282: INFO: Waiting up to 5m0s for pod "pod-cbc81df0-1886-4cb7-92aa-5d4cdc92297b" in namespace "emptydir-1561" to be "Succeeded or Failed"
Sep 12 13:34:46.431: INFO: Pod "pod-cbc81df0-1886-4cb7-92aa-5d4cdc92297b": Phase="Pending", Reason="", readiness=false. Elapsed: 148.389675ms
Sep 12 13:34:48.560: INFO: Pod "pod-cbc81df0-1886-4cb7-92aa-5d4cdc92297b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278140513s
Sep 12 13:34:50.670: INFO: Pod "pod-cbc81df0-1886-4cb7-92aa-5d4cdc92297b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.387774757s
STEP: Saw pod success
Sep 12 13:34:50.670: INFO: Pod "pod-cbc81df0-1886-4cb7-92aa-5d4cdc92297b" satisfied condition "Succeeded or Failed"
Sep 12 13:34:50.785: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-cbc81df0-1886-4cb7-92aa-5d4cdc92297b container test-container: <nil>
STEP: delete the pod
Sep 12 13:34:51.044: INFO: Waiting for pod pod-cbc81df0-1886-4cb7-92aa-5d4cdc92297b to disappear
Sep 12 13:34:51.154: INFO: Pod pod-cbc81df0-1886-4cb7-92aa-5d4cdc92297b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.335 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:51.499: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
W0912 13:34:45.872800    4709 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 12 13:34:45.872: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 12 13:34:46.202: INFO: Waiting up to 5m0s for pod "security-context-ba651a81-1c73-4726-bd87-0a80f950752a" in namespace "security-context-1259" to be "Succeeded or Failed"
Sep 12 13:34:46.312: INFO: Pod "security-context-ba651a81-1c73-4726-bd87-0a80f950752a": Phase="Pending", Reason="", readiness=false. Elapsed: 109.098542ms
Sep 12 13:34:48.430: INFO: Pod "security-context-ba651a81-1c73-4726-bd87-0a80f950752a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227254203s
Sep 12 13:34:50.541: INFO: Pod "security-context-ba651a81-1c73-4726-bd87-0a80f950752a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.337979069s
STEP: Saw pod success
Sep 12 13:34:50.541: INFO: Pod "security-context-ba651a81-1c73-4726-bd87-0a80f950752a" satisfied condition "Succeeded or Failed"
Sep 12 13:34:50.650: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod security-context-ba651a81-1c73-4726-bd87-0a80f950752a container test-container: <nil>
STEP: delete the pod
Sep 12 13:34:51.183: INFO: Waiting for pod security-context-ba651a81-1c73-4726-bd87-0a80f950752a to disappear
Sep 12 13:34:51.292: INFO: Pod security-context-ba651a81-1c73-4726-bd87-0a80f950752a no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.500 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:51.629: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Sep 12 13:34:47.479: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-1325" to be "Succeeded or Failed"
Sep 12 13:34:47.593: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 113.273182ms
Sep 12 13:34:49.703: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223239711s
Sep 12 13:34:51.814: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334273959s
Sep 12 13:34:53.926: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.447041896s
Sep 12 13:34:53.926: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:34:54.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1325" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":4,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:54.280: INFO: Only supported for providers [gce gke] (not aws)
... skipping 25 lines ...
W0912 13:34:45.304024    4903 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 12 13:34:45.304: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 12 13:34:45.638: INFO: Waiting up to 5m0s for pod "pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34" in namespace "emptydir-825" to be "Succeeded or Failed"
Sep 12 13:34:45.748: INFO: Pod "pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34": Phase="Pending", Reason="", readiness=false. Elapsed: 109.154364ms
Sep 12 13:34:47.857: INFO: Pod "pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218910028s
Sep 12 13:34:49.966: INFO: Pod "pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328136018s
Sep 12 13:34:52.075: INFO: Pod "pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436791366s
Sep 12 13:34:54.185: INFO: Pod "pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.546139307s
STEP: Saw pod success
Sep 12 13:34:54.185: INFO: Pod "pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34" satisfied condition "Succeeded or Failed"
Sep 12 13:34:54.293: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34 container test-container: <nil>
STEP: delete the pod
Sep 12 13:34:54.523: INFO: Waiting for pod pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34 to disappear
Sep 12 13:34:54.631: INFO: Pod pod-b6f1598e-2ed3-4475-9da4-43c3fa0c8b34 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.870 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:54.983: INFO: Driver local doesn't support ext4 -- skipping
... skipping 150 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 12 13:34:46.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1" in namespace "projected-7724" to be "Succeeded or Failed"
Sep 12 13:34:46.868: INFO: Pod "downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1": Phase="Pending", Reason="", readiness=false. Elapsed: 108.639069ms
Sep 12 13:34:48.988: INFO: Pod "downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22853331s
Sep 12 13:34:51.097: INFO: Pod "downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33759894s
Sep 12 13:34:53.207: INFO: Pod "downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447481779s
Sep 12 13:34:55.317: INFO: Pod "downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.557327283s
STEP: Saw pod success
Sep 12 13:34:55.317: INFO: Pod "downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1" satisfied condition "Succeeded or Failed"
Sep 12 13:34:55.426: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1 container client-container: <nil>
STEP: delete the pod
Sep 12 13:34:55.652: INFO: Waiting for pod downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1 to disappear
Sep 12 13:34:55.762: INFO: Pod downwardapi-volume-59834fdc-dcf8-43de-9339-77f7fc29bcd1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.876 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:56.104: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Sep 12 13:34:51.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 12 13:34:52.328: INFO: Waiting up to 5m0s for pod "pod-327bfecb-81cb-4cf5-9fb0-9ee9bca54e46" in namespace "emptydir-9625" to be "Succeeded or Failed"
Sep 12 13:34:52.436: INFO: Pod "pod-327bfecb-81cb-4cf5-9fb0-9ee9bca54e46": Phase="Pending", Reason="", readiness=false. Elapsed: 108.549013ms
Sep 12 13:34:54.545: INFO: Pod "pod-327bfecb-81cb-4cf5-9fb0-9ee9bca54e46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21780983s
Sep 12 13:34:56.658: INFO: Pod "pod-327bfecb-81cb-4cf5-9fb0-9ee9bca54e46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329995853s
STEP: Saw pod success
Sep 12 13:34:56.658: INFO: Pod "pod-327bfecb-81cb-4cf5-9fb0-9ee9bca54e46" satisfied condition "Succeeded or Failed"
Sep 12 13:34:56.769: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-327bfecb-81cb-4cf5-9fb0-9ee9bca54e46 container test-container: <nil>
STEP: delete the pod
Sep 12 13:34:57.010: INFO: Waiting for pod pod-327bfecb-81cb-4cf5-9fb0-9ee9bca54e46 to disappear
Sep 12 13:34:57.123: INFO: Pod pod-327bfecb-81cb-4cf5-9fb0-9ee9bca54e46 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Sep 12 13:34:52.058: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 12 13:34:52.058: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-th9k
STEP: Creating a pod to test exec-volume-test
Sep 12 13:34:52.169: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-th9k" in namespace "volume-2720" to be "Succeeded or Failed"
Sep 12 13:34:52.278: INFO: Pod "exec-volume-test-inlinevolume-th9k": Phase="Pending", Reason="", readiness=false. Elapsed: 108.721689ms
Sep 12 13:34:54.387: INFO: Pod "exec-volume-test-inlinevolume-th9k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217952525s
Sep 12 13:34:56.502: INFO: Pod "exec-volume-test-inlinevolume-th9k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332594617s
STEP: Saw pod success
Sep 12 13:34:56.502: INFO: Pod "exec-volume-test-inlinevolume-th9k" satisfied condition "Succeeded or Failed"
Sep 12 13:34:56.611: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod exec-volume-test-inlinevolume-th9k container exec-container-inlinevolume-th9k: <nil>
STEP: delete the pod
Sep 12 13:34:56.872: INFO: Waiting for pod exec-volume-test-inlinevolume-th9k to disappear
Sep 12 13:34:56.987: INFO: Pod exec-volume-test-inlinevolume-th9k no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-th9k
Sep 12 13:34:56.987: INFO: Deleting pod "exec-volume-test-inlinevolume-th9k" in namespace "volume-2720"
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":14,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:57.779: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 12 13:34:51.046: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc" in namespace "projected-5022" to be "Succeeded or Failed"
Sep 12 13:34:51.156: INFO: Pod "downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.16647ms
Sep 12 13:34:53.275: INFO: Pod "downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228330443s
Sep 12 13:34:55.387: INFO: Pod "downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340205868s
Sep 12 13:34:57.546: INFO: Pod "downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.499867332s
STEP: Saw pod success
Sep 12 13:34:57.546: INFO: Pod "downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc" satisfied condition "Succeeded or Failed"
Sep 12 13:34:57.660: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc container client-container: <nil>
STEP: delete the pod
Sep 12 13:34:57.890: INFO: Waiting for pod downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc to disappear
Sep 12 13:34:57.998: INFO: Pod downwardapi-volume-9973a30d-d293-4b0f-a39f-67621e4506dc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.868 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:34:58.242: INFO: Only supported for providers [gce gke] (not aws)
... skipping 48 lines ...
W0912 13:34:45.658891    4714 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 12 13:34:45.658: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Sep 12 13:34:45.986: INFO: Waiting up to 5m0s for pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9" in namespace "containers-4034" to be "Succeeded or Failed"
Sep 12 13:34:46.095: INFO: Pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9": Phase="Pending", Reason="", readiness=false. Elapsed: 108.020618ms
Sep 12 13:34:48.204: INFO: Pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217419199s
Sep 12 13:34:50.316: INFO: Pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3295053s
Sep 12 13:34:52.426: INFO: Pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439810675s
Sep 12 13:34:54.535: INFO: Pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548763815s
Sep 12 13:34:56.653: INFO: Pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9": Phase="Running", Reason="", readiness=true. Elapsed: 10.666176283s
Sep 12 13:34:58.766: INFO: Pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.779240694s
STEP: Saw pod success
Sep 12 13:34:58.766: INFO: Pod "client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9" satisfied condition "Succeeded or Failed"
Sep 12 13:34:58.876: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9 container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:34:59.112: INFO: Waiting for pod client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9 to disappear
Sep 12 13:34:59.221: INFO: Pod client-containers-3458d70f-b054-42db-aa94-ed1e013ffde9 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.432 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:15.474 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:02.093: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 127 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:05.819: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Sep 12 13:34:55.730: INFO: PersistentVolumeClaim pvc-r84qd found but phase is Pending instead of Bound.
Sep 12 13:34:57.841: INFO: PersistentVolumeClaim pvc-r84qd found and phase=Bound (2.224054231s)
Sep 12 13:34:57.841: INFO: Waiting up to 3m0s for PersistentVolume local-d4vdw to have phase Bound
Sep 12 13:34:57.955: INFO: PersistentVolume local-d4vdw found and phase=Bound (113.469912ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-gw5w
STEP: Creating a pod to test exec-volume-test
Sep 12 13:34:58.298: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-gw5w" in namespace "volume-8646" to be "Succeeded or Failed"
Sep 12 13:34:58.408: INFO: Pod "exec-volume-test-preprovisionedpv-gw5w": Phase="Pending", Reason="", readiness=false. Elapsed: 109.69448ms
Sep 12 13:35:00.519: INFO: Pod "exec-volume-test-preprovisionedpv-gw5w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220812001s
Sep 12 13:35:02.633: INFO: Pod "exec-volume-test-preprovisionedpv-gw5w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.334683003s
STEP: Saw pod success
Sep 12 13:35:02.633: INFO: Pod "exec-volume-test-preprovisionedpv-gw5w" satisfied condition "Succeeded or Failed"
Sep 12 13:35:02.743: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-gw5w container exec-container-preprovisionedpv-gw5w: <nil>
STEP: delete the pod
Sep 12 13:35:02.971: INFO: Waiting for pod exec-volume-test-preprovisionedpv-gw5w to disappear
Sep 12 13:35:03.081: INFO: Pod exec-volume-test-preprovisionedpv-gw5w no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-gw5w
Sep 12 13:35:03.081: INFO: Deleting pod "exec-volume-test-preprovisionedpv-gw5w" in namespace "volume-8646"
... skipping 33 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
STEP: Creating a pod to test downward API volume plugin
Sep 12 13:34:58.941: INFO: Waiting up to 5m0s for pod "metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3" in namespace "projected-3748" to be "Succeeded or Failed"
Sep 12 13:34:59.050: INFO: Pod "metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 108.290385ms
Sep 12 13:35:01.160: INFO: Pod "metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218902139s
Sep 12 13:35:03.270: INFO: Pod "metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3": Phase="Running", Reason="", readiness=true. Elapsed: 4.328196374s
Sep 12 13:35:05.379: INFO: Pod "metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437249521s
STEP: Saw pod success
Sep 12 13:35:05.379: INFO: Pod "metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3" satisfied condition "Succeeded or Failed"
Sep 12 13:35:05.502: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3 container client-container: <nil>
STEP: delete the pod
Sep 12 13:35:05.725: INFO: Waiting for pod metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3 to disappear
Sep 12 13:35:05.833: INFO: Pod metadata-volume-b7c3692f-e7d8-4e94-93af-ad1e2b884ec3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.771 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:06.067: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
Sep 12 13:34:54.605: INFO: PersistentVolumeClaim pvc-4qmqf found but phase is Pending instead of Bound.
Sep 12 13:34:56.713: INFO: PersistentVolumeClaim pvc-4qmqf found and phase=Bound (6.43083787s)
Sep 12 13:34:56.713: INFO: Waiting up to 3m0s for PersistentVolume local-v5ml8 to have phase Bound
Sep 12 13:34:56.821: INFO: PersistentVolume local-v5ml8 found and phase=Bound (107.254264ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-5msk
STEP: Creating a pod to test exec-volume-test
Sep 12 13:34:57.148: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-5msk" in namespace "volume-4475" to be "Succeeded or Failed"
Sep 12 13:34:57.271: INFO: Pod "exec-volume-test-preprovisionedpv-5msk": Phase="Pending", Reason="", readiness=false. Elapsed: 123.245768ms
Sep 12 13:34:59.380: INFO: Pod "exec-volume-test-preprovisionedpv-5msk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232006361s
Sep 12 13:35:01.488: INFO: Pod "exec-volume-test-preprovisionedpv-5msk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340555274s
Sep 12 13:35:03.597: INFO: Pod "exec-volume-test-preprovisionedpv-5msk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.448881192s
Sep 12 13:35:05.705: INFO: Pod "exec-volume-test-preprovisionedpv-5msk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.557005667s
STEP: Saw pod success
Sep 12 13:35:05.705: INFO: Pod "exec-volume-test-preprovisionedpv-5msk" satisfied condition "Succeeded or Failed"
Sep 12 13:35:05.812: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-5msk container exec-container-preprovisionedpv-5msk: <nil>
STEP: delete the pod
Sep 12 13:35:06.043: INFO: Waiting for pod exec-volume-test-preprovisionedpv-5msk to disappear
Sep 12 13:35:06.150: INFO: Pod exec-volume-test-preprovisionedpv-5msk no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-5msk
Sep 12 13:35:06.150: INFO: Deleting pod "exec-volume-test-preprovisionedpv-5msk" in namespace "volume-4475"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:07.769: INFO: Only supported for providers [vsphere] (not aws)
... skipping 92 lines ...
Sep 12 13:35:07.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 12 13:35:08.542: INFO: Waiting up to 5m0s for pod "pod-5998df83-c1f2-4cea-a230-fa76b4dba8b3" in namespace "emptydir-9740" to be "Succeeded or Failed"
Sep 12 13:35:08.649: INFO: Pod "pod-5998df83-c1f2-4cea-a230-fa76b4dba8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 107.025751ms
Sep 12 13:35:10.757: INFO: Pod "pod-5998df83-c1f2-4cea-a230-fa76b4dba8b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.214832047s
STEP: Saw pod success
Sep 12 13:35:10.757: INFO: Pod "pod-5998df83-c1f2-4cea-a230-fa76b4dba8b3" satisfied condition "Succeeded or Failed"
Sep 12 13:35:10.864: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-5998df83-c1f2-4cea-a230-fa76b4dba8b3 container test-container: <nil>
STEP: delete the pod
Sep 12 13:35:11.104: INFO: Waiting for pod pod-5998df83-c1f2-4cea-a230-fa76b4dba8b3 to disappear
Sep 12 13:35:11.213: INFO: Pod pod-5998df83-c1f2-4cea-a230-fa76b4dba8b3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 29 lines ...
• [SLOW TEST:9.552 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:11.703: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 25 lines ...
W0912 13:34:44.531245    4725 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Sep 12 13:34:44.531: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
STEP: Creating a pod to test service account token: 
Sep 12 13:34:44.881: INFO: Waiting up to 5m0s for pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b" in namespace "svcaccounts-103" to be "Succeeded or Failed"
Sep 12 13:34:44.990: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 109.839593ms
Sep 12 13:34:47.104: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223337698s
Sep 12 13:34:49.214: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.333720945s
STEP: Saw pod success
Sep 12 13:34:49.214: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b" satisfied condition "Succeeded or Failed"
Sep 12 13:34:49.324: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:34:49.839: INFO: Waiting for pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b to disappear
Sep 12 13:34:49.948: INFO: Pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b no longer exists
STEP: Creating a pod to test service account token: 
Sep 12 13:34:50.061: INFO: Waiting up to 5m0s for pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b" in namespace "svcaccounts-103" to be "Succeeded or Failed"
Sep 12 13:34:50.171: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 109.794711ms
Sep 12 13:34:52.282: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220677638s
Sep 12 13:34:54.392: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330535811s
Sep 12 13:34:56.508: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.446143266s
STEP: Saw pod success
Sep 12 13:34:56.508: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b" satisfied condition "Succeeded or Failed"
Sep 12 13:34:56.626: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:34:56.888: INFO: Waiting for pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b to disappear
Sep 12 13:34:57.000: INFO: Pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b no longer exists
STEP: Creating a pod to test service account token: 
Sep 12 13:34:57.112: INFO: Waiting up to 5m0s for pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b" in namespace "svcaccounts-103" to be "Succeeded or Failed"
Sep 12 13:34:57.226: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 114.257342ms
Sep 12 13:34:59.336: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224160072s
Sep 12 13:35:01.449: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336387485s
Sep 12 13:35:03.560: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.447458606s
STEP: Saw pod success
Sep 12 13:35:03.560: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b" satisfied condition "Succeeded or Failed"
Sep 12 13:35:03.669: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:35:03.895: INFO: Waiting for pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b to disappear
Sep 12 13:35:04.005: INFO: Pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b no longer exists
STEP: Creating a pod to test service account token: 
Sep 12 13:35:04.116: INFO: Waiting up to 5m0s for pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b" in namespace "svcaccounts-103" to be "Succeeded or Failed"
Sep 12 13:35:04.227: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 110.947549ms
Sep 12 13:35:06.338: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221366549s
Sep 12 13:35:08.450: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333589118s
Sep 12 13:35:10.561: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444867361s
Sep 12 13:35:12.673: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.556315939s
STEP: Saw pod success
Sep 12 13:35:12.673: INFO: Pod "test-pod-32a56754-24d2-4990-8b59-f62a56a4314b" satisfied condition "Succeeded or Failed"
Sep 12 13:35:12.787: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:35:13.019: INFO: Waiting for pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b to disappear
Sep 12 13:35:13.131: INFO: Pod test-pod-32a56754-24d2-4990-8b59-f62a56a4314b no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:29.388 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:488
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:15.835: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 97 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:431
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":2,"skipped":24,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:18.594: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 123 lines ...
• [SLOW TEST:24.567 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":2,"skipped":36,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Sep 12 13:35:09.957: INFO: PersistentVolumeClaim pvc-7wkjk found but phase is Pending instead of Bound.
Sep 12 13:35:12.067: INFO: PersistentVolumeClaim pvc-7wkjk found and phase=Bound (8.551208223s)
Sep 12 13:35:12.067: INFO: Waiting up to 3m0s for PersistentVolume local-pqcnx to have phase Bound
Sep 12 13:35:12.175: INFO: PersistentVolume local-pqcnx found and phase=Bound (108.470754ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-x52q
STEP: Creating a pod to test subpath
Sep 12 13:35:12.506: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-x52q" in namespace "provisioning-9619" to be "Succeeded or Failed"
Sep 12 13:35:12.616: INFO: Pod "pod-subpath-test-preprovisionedpv-x52q": Phase="Pending", Reason="", readiness=false. Elapsed: 110.683341ms
Sep 12 13:35:14.725: INFO: Pod "pod-subpath-test-preprovisionedpv-x52q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219615451s
Sep 12 13:35:16.834: INFO: Pod "pod-subpath-test-preprovisionedpv-x52q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327953568s
Sep 12 13:35:18.943: INFO: Pod "pod-subpath-test-preprovisionedpv-x52q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437542157s
STEP: Saw pod success
Sep 12 13:35:18.943: INFO: Pod "pod-subpath-test-preprovisionedpv-x52q" satisfied condition "Succeeded or Failed"
Sep 12 13:35:19.052: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-x52q container test-container-volume-preprovisionedpv-x52q: <nil>
STEP: delete the pod
Sep 12 13:35:19.281: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-x52q to disappear
Sep 12 13:35:19.389: INFO: Pod pod-subpath-test-preprovisionedpv-x52q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-x52q
Sep 12 13:35:19.390: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-x52q" in namespace "provisioning-9619"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:20.933: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 90 lines ...
Sep 12 13:35:11.457: INFO: PersistentVolumeClaim pvc-68kfw found but phase is Pending instead of Bound.
Sep 12 13:35:13.568: INFO: PersistentVolumeClaim pvc-68kfw found and phase=Bound (14.905901848s)
Sep 12 13:35:13.568: INFO: Waiting up to 3m0s for PersistentVolume local-vxzgn to have phase Bound
Sep 12 13:35:13.678: INFO: PersistentVolume local-vxzgn found and phase=Bound (109.673684ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xjr8
STEP: Creating a pod to test subpath
Sep 12 13:35:14.011: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xjr8" in namespace "provisioning-1151" to be "Succeeded or Failed"
Sep 12 13:35:14.121: INFO: Pod "pod-subpath-test-preprovisionedpv-xjr8": Phase="Pending", Reason="", readiness=false. Elapsed: 109.924919ms
Sep 12 13:35:16.231: INFO: Pod "pod-subpath-test-preprovisionedpv-xjr8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220513472s
Sep 12 13:35:18.345: INFO: Pod "pod-subpath-test-preprovisionedpv-xjr8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33402061s
Sep 12 13:35:20.455: INFO: Pod "pod-subpath-test-preprovisionedpv-xjr8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.444663834s
STEP: Saw pod success
Sep 12 13:35:20.455: INFO: Pod "pod-subpath-test-preprovisionedpv-xjr8" satisfied condition "Succeeded or Failed"
Sep 12 13:35:20.565: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-xjr8 container test-container-volume-preprovisionedpv-xjr8: <nil>
STEP: delete the pod
Sep 12 13:35:20.795: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xjr8 to disappear
Sep 12 13:35:20.905: INFO: Pod pod-subpath-test-preprovisionedpv-xjr8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xjr8
Sep 12 13:35:20.905: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xjr8" in namespace "provisioning-1151"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:22.474: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":4,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:22.735: INFO: Only supported for providers [openstack] (not aws)
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 12 13:35:12.385: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-fca6d821-0b0d-467d-983b-651da1eb2801" in namespace "security-context-test-4876" to be "Succeeded or Failed"
Sep 12 13:35:12.492: INFO: Pod "alpine-nnp-false-fca6d821-0b0d-467d-983b-651da1eb2801": Phase="Pending", Reason="", readiness=false. Elapsed: 107.266187ms
Sep 12 13:35:14.601: INFO: Pod "alpine-nnp-false-fca6d821-0b0d-467d-983b-651da1eb2801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216233839s
Sep 12 13:35:16.719: INFO: Pod "alpine-nnp-false-fca6d821-0b0d-467d-983b-651da1eb2801": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334028582s
Sep 12 13:35:18.832: INFO: Pod "alpine-nnp-false-fca6d821-0b0d-467d-983b-651da1eb2801": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447506369s
Sep 12 13:35:20.944: INFO: Pod "alpine-nnp-false-fca6d821-0b0d-467d-983b-651da1eb2801": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559759595s
Sep 12 13:35:23.075: INFO: Pod "alpine-nnp-false-fca6d821-0b0d-467d-983b-651da1eb2801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.690629382s
Sep 12 13:35:23.075: INFO: Pod "alpine-nnp-false-fca6d821-0b0d-467d-983b-651da1eb2801" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:35:23.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4876" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:23.429: INFO: Only supported for providers [openstack] (not aws)
... skipping 110 lines ...
Sep 12 13:35:09.593: INFO: PersistentVolumeClaim pvc-wlfhb found but phase is Pending instead of Bound.
Sep 12 13:35:11.704: INFO: PersistentVolumeClaim pvc-wlfhb found and phase=Bound (10.672719987s)
Sep 12 13:35:11.704: INFO: Waiting up to 3m0s for PersistentVolume local-9xb6m to have phase Bound
Sep 12 13:35:11.814: INFO: PersistentVolume local-9xb6m found and phase=Bound (110.288819ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-npr9
STEP: Creating a pod to test subpath
Sep 12 13:35:12.147: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-npr9" in namespace "provisioning-4875" to be "Succeeded or Failed"
Sep 12 13:35:12.257: INFO: Pod "pod-subpath-test-preprovisionedpv-npr9": Phase="Pending", Reason="", readiness=false. Elapsed: 110.020367ms
Sep 12 13:35:14.368: INFO: Pod "pod-subpath-test-preprovisionedpv-npr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220784169s
Sep 12 13:35:16.478: INFO: Pod "pod-subpath-test-preprovisionedpv-npr9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331526384s
Sep 12 13:35:18.589: INFO: Pod "pod-subpath-test-preprovisionedpv-npr9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441859325s
Sep 12 13:35:20.712: INFO: Pod "pod-subpath-test-preprovisionedpv-npr9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.565445806s
STEP: Saw pod success
Sep 12 13:35:20.712: INFO: Pod "pod-subpath-test-preprovisionedpv-npr9" satisfied condition "Succeeded or Failed"
Sep 12 13:35:20.827: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-npr9 container test-container-subpath-preprovisionedpv-npr9: <nil>
STEP: delete the pod
Sep 12 13:35:21.063: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-npr9 to disappear
Sep 12 13:35:21.173: INFO: Pod pod-subpath-test-preprovisionedpv-npr9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-npr9
Sep 12 13:35:21.173: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-npr9" in namespace "provisioning-4875"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":11,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:23.536: INFO: Only supported for providers [gce gke] (not aws)
... skipping 211 lines ...
Sep 12 13:35:10.849: INFO: PersistentVolumeClaim pvc-qxwcn found but phase is Pending instead of Bound.
Sep 12 13:35:12.959: INFO: PersistentVolumeClaim pvc-qxwcn found and phase=Bound (4.329329263s)
Sep 12 13:35:12.959: INFO: Waiting up to 3m0s for PersistentVolume local-wqtn5 to have phase Bound
Sep 12 13:35:13.068: INFO: PersistentVolume local-wqtn5 found and phase=Bound (109.097518ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-btw8
STEP: Creating a pod to test subpath
Sep 12 13:35:13.396: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-btw8" in namespace "provisioning-2570" to be "Succeeded or Failed"
Sep 12 13:35:13.505: INFO: Pod "pod-subpath-test-preprovisionedpv-btw8": Phase="Pending", Reason="", readiness=false. Elapsed: 108.610344ms
Sep 12 13:35:15.615: INFO: Pod "pod-subpath-test-preprovisionedpv-btw8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219336133s
Sep 12 13:35:17.726: INFO: Pod "pod-subpath-test-preprovisionedpv-btw8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329673482s
Sep 12 13:35:19.837: INFO: Pod "pod-subpath-test-preprovisionedpv-btw8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440903356s
Sep 12 13:35:21.950: INFO: Pod "pod-subpath-test-preprovisionedpv-btw8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553640419s
STEP: Saw pod success
Sep 12 13:35:21.950: INFO: Pod "pod-subpath-test-preprovisionedpv-btw8" satisfied condition "Succeeded or Failed"
Sep 12 13:35:22.059: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-btw8 container test-container-subpath-preprovisionedpv-btw8: <nil>
STEP: delete the pod
Sep 12 13:35:22.296: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-btw8 to disappear
Sep 12 13:35:22.405: INFO: Pod pod-subpath-test-preprovisionedpv-btw8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-btw8
Sep 12 13:35:22.405: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-btw8" in namespace "provisioning-2570"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":3,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:35:25.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-5300" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":5,"skipped":45,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:25.690: INFO: Only supported for providers [gce gke] (not aws)
... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:26.364: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:28.082: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 105 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-d63470e5-3fde-48a6-9361-b63a9b438bbe
STEP: Creating a pod to test consume configMaps
Sep 12 13:35:23.551: INFO: Waiting up to 5m0s for pod "pod-configmaps-75b2b3f7-3899-4d6a-a725-841e244cdfd8" in namespace "configmap-671" to be "Succeeded or Failed"
Sep 12 13:35:23.659: INFO: Pod "pod-configmaps-75b2b3f7-3899-4d6a-a725-841e244cdfd8": Phase="Pending", Reason="", readiness=false. Elapsed: 107.928987ms
Sep 12 13:35:25.769: INFO: Pod "pod-configmaps-75b2b3f7-3899-4d6a-a725-841e244cdfd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217896906s
Sep 12 13:35:27.878: INFO: Pod "pod-configmaps-75b2b3f7-3899-4d6a-a725-841e244cdfd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327118606s
STEP: Saw pod success
Sep 12 13:35:27.878: INFO: Pod "pod-configmaps-75b2b3f7-3899-4d6a-a725-841e244cdfd8" satisfied condition "Succeeded or Failed"
Sep 12 13:35:27.986: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-configmaps-75b2b3f7-3899-4d6a-a725-841e244cdfd8 container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:35:28.214: INFO: Waiting for pod pod-configmaps-75b2b3f7-3899-4d6a-a725-841e244cdfd8 to disappear
Sep 12 13:35:28.322: INFO: Pod pod-configmaps-75b2b3f7-3899-4d6a-a725-841e244cdfd8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.816 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
Sep 12 13:35:26.531: INFO: Creating a PV followed by a PVC
Sep 12 13:35:26.782: INFO: Waiting for PV local-pv22sm5 to bind to PVC pvc-6tpnj
Sep 12 13:35:26.782: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6tpnj] to have phase Bound
Sep 12 13:35:26.890: INFO: PersistentVolumeClaim pvc-6tpnj found and phase=Bound (108.361925ms)
Sep 12 13:35:26.890: INFO: Waiting up to 3m0s for PersistentVolume local-pv22sm5 to have phase Bound
Sep 12 13:35:26.999: INFO: PersistentVolume local-pv22sm5 found and phase=Bound (109.259592ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
Sep 12 13:35:27.327: INFO: Waiting up to 5m0s for pod "pod-e2a9f293-850a-4b14-bec6-bfd0f94eb9e2" in namespace "persistent-local-volumes-test-9950" to be "Unschedulable"
Sep 12 13:35:27.436: INFO: Pod "pod-e2a9f293-850a-4b14-bec6-bfd0f94eb9e2": Phase="Pending", Reason="", readiness=false. Elapsed: 108.911768ms
Sep 12 13:35:27.436: INFO: Pod "pod-e2a9f293-850a-4b14-bec6-bfd0f94eb9e2" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:10.176 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":3,"skipped":43,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:28.848: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
Sep 12 13:35:29.390: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.772 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 135 lines ...
• [SLOW TEST:8.598 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:32.173: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 105 lines ...
• [SLOW TEST:7.538 seconds]
[sig-api-machinery] Generated clientset
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/generated_clientset.go:103
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:33.925: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:35:34.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-5967" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":4,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:35.149: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 130 lines ...
Sep 12 13:35:00.995: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-z7m85] to have phase Bound
Sep 12 13:35:01.109: INFO: PersistentVolumeClaim pvc-z7m85 found and phase=Bound (114.304378ms)
STEP: Deleting the previously created pod
Sep 12 13:35:11.661: INFO: Deleting pod "pvc-volume-tester-m68f2" in namespace "csi-mock-volumes-9248"
Sep 12 13:35:11.770: INFO: Wait up to 5m0s for pod "pvc-volume-tester-m68f2" to be fully deleted
STEP: Checking CSI driver logs
Sep 12 13:35:16.109: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/74813ba3-26be-4df0-95fc-6230779d0b00/volumes/kubernetes.io~csi/pvc-0b280eae-2b2b-49bd-b673-77a3d9361582/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-m68f2
Sep 12 13:35:16.109: INFO: Deleting pod "pvc-volume-tester-m68f2" in namespace "csi-mock-volumes-9248"
STEP: Deleting claim pvc-z7m85
Sep 12 13:35:16.436: INFO: Waiting up to 2m0s for PersistentVolume pvc-0b280eae-2b2b-49bd-b673-77a3d9361582 to get deleted
Sep 12 13:35:16.612: INFO: PersistentVolume pvc-0b280eae-2b2b-49bd-b673-77a3d9361582 found and phase=Released (176.251172ms)
Sep 12 13:35:18.726: INFO: PersistentVolume pvc-0b280eae-2b2b-49bd-b673-77a3d9361582 found and phase=Released (2.289997499s)
... skipping 92 lines ...
• [SLOW TEST:10.024 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Replicaset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":4,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:38.902: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 78 lines ...
• [SLOW TEST:17.019 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":6,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:42.725: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 230 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256
    CSIStorageCapacity unused
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:44.328: INFO: Only supported for providers [openstack] (not aws)
... skipping 46 lines ...
Sep 12 13:35:25.987: INFO: PersistentVolumeClaim pvc-mx2x7 found but phase is Pending instead of Bound.
Sep 12 13:35:28.097: INFO: PersistentVolumeClaim pvc-mx2x7 found and phase=Bound (8.549559114s)
Sep 12 13:35:28.097: INFO: Waiting up to 3m0s for PersistentVolume local-m2zsh to have phase Bound
Sep 12 13:35:28.206: INFO: PersistentVolume local-m2zsh found and phase=Bound (108.699265ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-f8c2
STEP: Creating a pod to test subpath
Sep 12 13:35:28.570: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-f8c2" in namespace "provisioning-9679" to be "Succeeded or Failed"
Sep 12 13:35:28.709: INFO: Pod "pod-subpath-test-preprovisionedpv-f8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 138.792951ms
Sep 12 13:35:30.827: INFO: Pod "pod-subpath-test-preprovisionedpv-f8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256334633s
Sep 12 13:35:32.943: INFO: Pod "pod-subpath-test-preprovisionedpv-f8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372416094s
Sep 12 13:35:35.053: INFO: Pod "pod-subpath-test-preprovisionedpv-f8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482908367s
Sep 12 13:35:37.166: INFO: Pod "pod-subpath-test-preprovisionedpv-f8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.595426708s
Sep 12 13:35:39.276: INFO: Pod "pod-subpath-test-preprovisionedpv-f8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.705446088s
Sep 12 13:35:41.385: INFO: Pod "pod-subpath-test-preprovisionedpv-f8c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.814681946s
STEP: Saw pod success
Sep 12 13:35:41.385: INFO: Pod "pod-subpath-test-preprovisionedpv-f8c2" satisfied condition "Succeeded or Failed"
Sep 12 13:35:41.494: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-f8c2 container test-container-subpath-preprovisionedpv-f8c2: <nil>
STEP: delete the pod
Sep 12 13:35:41.749: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-f8c2 to disappear
Sep 12 13:35:41.858: INFO: Pod pod-subpath-test-preprovisionedpv-f8c2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-f8c2
Sep 12 13:35:41.858: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-f8c2" in namespace "provisioning-9679"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:35:11.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 54 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":4,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:35:44.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 12 13:35:45.011: INFO: Waiting up to 5m0s for pod "pod-3f3a8be1-c207-4cfa-aad0-f95648489e7d" in namespace "emptydir-5968" to be "Succeeded or Failed"
Sep 12 13:35:45.120: INFO: Pod "pod-3f3a8be1-c207-4cfa-aad0-f95648489e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 108.922695ms
Sep 12 13:35:47.238: INFO: Pod "pod-3f3a8be1-c207-4cfa-aad0-f95648489e7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.226988478s
STEP: Saw pod success
Sep 12 13:35:47.239: INFO: Pod "pod-3f3a8be1-c207-4cfa-aad0-f95648489e7d" satisfied condition "Succeeded or Failed"
Sep 12 13:35:47.348: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-3f3a8be1-c207-4cfa-aad0-f95648489e7d container test-container: <nil>
STEP: delete the pod
Sep 12 13:35:47.576: INFO: Waiting for pod pod-3f3a8be1-c207-4cfa-aad0-f95648489e7d to disappear
Sep 12 13:35:47.698: INFO: Pod pod-3f3a8be1-c207-4cfa-aad0-f95648489e7d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:35:47.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5968" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:47.950: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 14 lines ...
      Driver hostPathSymlink doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:35:35.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:457
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":5,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:48.415: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 88 lines ...
Sep 12 13:35:45.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 12 13:35:46.329: INFO: Waiting up to 5m0s for pod "pod-31b75752-c1d6-4696-b24d-2a471a19d2c9" in namespace "emptydir-1830" to be "Succeeded or Failed"
Sep 12 13:35:46.439: INFO: Pod "pod-31b75752-c1d6-4696-b24d-2a471a19d2c9": Phase="Pending", Reason="", readiness=false. Elapsed: 110.000082ms
Sep 12 13:35:48.548: INFO: Pod "pod-31b75752-c1d6-4696-b24d-2a471a19d2c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219329447s
STEP: Saw pod success
Sep 12 13:35:48.549: INFO: Pod "pod-31b75752-c1d6-4696-b24d-2a471a19d2c9" satisfied condition "Succeeded or Failed"
Sep 12 13:35:48.659: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-31b75752-c1d6-4696-b24d-2a471a19d2c9 container test-container: <nil>
STEP: delete the pod
Sep 12 13:35:48.898: INFO: Waiting for pod pod-31b75752-c1d6-4696-b24d-2a471a19d2c9 to disappear
Sep 12 13:35:49.008: INFO: Pod pod-31b75752-c1d6-4696-b24d-2a471a19d2c9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:35:49.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1830" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:35:49.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-2362" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:49.416: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 87 lines ...
• [SLOW TEST:31.731 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:35:42.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 47 lines ...
Sep 12 13:34:45.744: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9718xjfcl
STEP: creating a claim
Sep 12 13:34:45.854: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-2b7n
STEP: Creating a pod to test atomic-volume-subpath
Sep 12 13:34:46.184: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-2b7n" in namespace "provisioning-9718" to be "Succeeded or Failed"
Sep 12 13:34:46.293: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Pending", Reason="", readiness=false. Elapsed: 108.595347ms
Sep 12 13:34:48.427: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242579725s
Sep 12 13:34:50.539: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354360389s
Sep 12 13:34:52.649: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463992447s
Sep 12 13:34:54.758: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.573573342s
Sep 12 13:34:56.889: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.704023371s
... skipping 14 lines ...
Sep 12 13:35:28.557: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Running", Reason="", readiness=true. Elapsed: 42.372200688s
Sep 12 13:35:30.674: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Running", Reason="", readiness=true. Elapsed: 44.489283249s
Sep 12 13:35:32.784: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Running", Reason="", readiness=true. Elapsed: 46.599136787s
Sep 12 13:35:34.893: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Running", Reason="", readiness=true. Elapsed: 48.70878346s
Sep 12 13:35:37.011: INFO: Pod "pod-subpath-test-dynamicpv-2b7n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.82652192s
STEP: Saw pod success
Sep 12 13:35:37.011: INFO: Pod "pod-subpath-test-dynamicpv-2b7n" satisfied condition "Succeeded or Failed"
Sep 12 13:35:37.128: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-2b7n container test-container-subpath-dynamicpv-2b7n: <nil>
STEP: delete the pod
Sep 12 13:35:37.409: INFO: Waiting for pod pod-subpath-test-dynamicpv-2b7n to disappear
Sep 12 13:35:37.517: INFO: Pod pod-subpath-test-dynamicpv-2b7n no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-2b7n
Sep 12 13:35:37.517: INFO: Deleting pod "pod-subpath-test-dynamicpv-2b7n" in namespace "provisioning-9718"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:53.966: INFO: Only supported for providers [gce gke] (not aws)
... skipping 160 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":7,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Sep 12 13:35:25.871: INFO: PersistentVolumeClaim pvc-ddw72 found but phase is Pending instead of Bound.
Sep 12 13:35:27.980: INFO: PersistentVolumeClaim pvc-ddw72 found and phase=Bound (2.218686326s)
Sep 12 13:35:27.980: INFO: Waiting up to 3m0s for PersistentVolume local-jd94n to have phase Bound
Sep 12 13:35:28.089: INFO: PersistentVolume local-jd94n found and phase=Bound (108.465908ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8qfx
STEP: Creating a pod to test atomic-volume-subpath
Sep 12 13:35:28.433: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8qfx" in namespace "provisioning-1676" to be "Succeeded or Failed"
Sep 12 13:35:28.545: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Pending", Reason="", readiness=false. Elapsed: 112.528849ms
Sep 12 13:35:30.657: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224527918s
Sep 12 13:35:32.767: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 4.334121035s
Sep 12 13:35:34.876: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 6.443144133s
Sep 12 13:35:37.005: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 8.572455401s
Sep 12 13:35:39.115: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 10.682562759s
Sep 12 13:35:41.224: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 12.791391902s
Sep 12 13:35:43.334: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 14.90142291s
Sep 12 13:35:45.450: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 17.017123519s
Sep 12 13:35:47.559: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 19.126720519s
Sep 12 13:35:49.668: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Running", Reason="", readiness=true. Elapsed: 21.235324828s
Sep 12 13:35:51.778: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.345745015s
STEP: Saw pod success
Sep 12 13:35:51.778: INFO: Pod "pod-subpath-test-preprovisionedpv-8qfx" satisfied condition "Succeeded or Failed"
Sep 12 13:35:51.887: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-8qfx container test-container-subpath-preprovisionedpv-8qfx: <nil>
STEP: delete the pod
Sep 12 13:35:52.110: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8qfx to disappear
Sep 12 13:35:52.219: INFO: Pod pod-subpath-test-preprovisionedpv-8qfx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8qfx
Sep 12 13:35:52.219: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8qfx" in namespace "provisioning-1676"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:55.408: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:35:05.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
• [SLOW TEST:50.735 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:56.725: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":81,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:35:59.146: INFO: Only supported for providers [openstack] (not aws)
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:00.130: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Sep 12 13:35:56.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 12 13:35:57.425: INFO: Waiting up to 5m0s for pod "security-context-0fa046cd-da1c-4d8f-923f-d142339948c5" in namespace "security-context-2130" to be "Succeeded or Failed"
Sep 12 13:35:57.536: INFO: Pod "security-context-0fa046cd-da1c-4d8f-923f-d142339948c5": Phase="Pending", Reason="", readiness=false. Elapsed: 111.147688ms
Sep 12 13:35:59.646: INFO: Pod "security-context-0fa046cd-da1c-4d8f-923f-d142339948c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.221333684s
STEP: Saw pod success
Sep 12 13:35:59.647: INFO: Pod "security-context-0fa046cd-da1c-4d8f-923f-d142339948c5" satisfied condition "Succeeded or Failed"
Sep 12 13:35:59.760: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod security-context-0fa046cd-da1c-4d8f-923f-d142339948c5 container test-container: <nil>
STEP: delete the pod
Sep 12 13:35:59.994: INFO: Waiting for pod security-context-0fa046cd-da1c-4d8f-923f-d142339948c5 to disappear
Sep 12 13:36:00.105: INFO: Pod security-context-0fa046cd-da1c-4d8f-923f-d142339948c5 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:00.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2130" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:00.339: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:01.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8211" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":7,"skipped":89,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:01.824: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:35:37.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":5,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:80.501 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:278
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:01.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:249
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:04.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 103 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:629
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":6,"skipped":33,"failed":0}

SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":8,"skipped":97,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:04.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:05.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9814" for this suite.

•SS
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":9,"skipped":97,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:05.849: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 82 lines ...
STEP: Destroying namespace "services-3643" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":10,"skipped":98,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 90 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 12 13:36:06.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5e78bac-5d7c-4e86-af8e-a25541a10330" in namespace "projected-4892" to be "Succeeded or Failed"
Sep 12 13:36:06.651: INFO: Pod "downwardapi-volume-c5e78bac-5d7c-4e86-af8e-a25541a10330": Phase="Pending", Reason="", readiness=false. Elapsed: 110.091873ms
Sep 12 13:36:08.761: INFO: Pod "downwardapi-volume-c5e78bac-5d7c-4e86-af8e-a25541a10330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220643097s
STEP: Saw pod success
Sep 12 13:36:08.761: INFO: Pod "downwardapi-volume-c5e78bac-5d7c-4e86-af8e-a25541a10330" satisfied condition "Succeeded or Failed"
Sep 12 13:36:08.871: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod downwardapi-volume-c5e78bac-5d7c-4e86-af8e-a25541a10330 container client-container: <nil>
STEP: delete the pod
Sep 12 13:36:09.098: INFO: Waiting for pod downwardapi-volume-c5e78bac-5d7c-4e86-af8e-a25541a10330 to disappear
Sep 12 13:36:09.208: INFO: Pod downwardapi-volume-c5e78bac-5d7c-4e86-af8e-a25541a10330 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:09.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4892" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":48,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:09.453: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 99 lines ...
STEP: Destroying namespace "pod-disks-8368" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.772 seconds]
[sig-storage] Pod Disks
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:449

  Requires at least 2 nodes (not 0)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pd.go:75
------------------------------
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":4,"skipped":31,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:10.178: INFO: Only supported for providers [gce gke] (not aws)
... skipping 56 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:35:52.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:18.433 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:10.930: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 14 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":8,"skipped":55,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:05.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 12 13:36:05.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d08adadc-5e19-431b-8c54-df20386234ed" in namespace "downward-api-7876" to be "Succeeded or Failed"
Sep 12 13:36:06.077: INFO: Pod "downwardapi-volume-d08adadc-5e19-431b-8c54-df20386234ed": Phase="Pending", Reason="", readiness=false. Elapsed: 107.492838ms
Sep 12 13:36:08.185: INFO: Pod "downwardapi-volume-d08adadc-5e19-431b-8c54-df20386234ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215343998s
Sep 12 13:36:10.293: INFO: Pod "downwardapi-volume-d08adadc-5e19-431b-8c54-df20386234ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.322974969s
STEP: Saw pod success
Sep 12 13:36:10.293: INFO: Pod "downwardapi-volume-d08adadc-5e19-431b-8c54-df20386234ed" satisfied condition "Succeeded or Failed"
Sep 12 13:36:10.400: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod downwardapi-volume-d08adadc-5e19-431b-8c54-df20386234ed container client-container: <nil>
STEP: delete the pod
Sep 12 13:36:10.620: INFO: Waiting for pod downwardapi-volume-d08adadc-5e19-431b-8c54-df20386234ed to disappear
Sep 12 13:36:10.728: INFO: Pod downwardapi-volume-d08adadc-5e19-431b-8c54-df20386234ed no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 82 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 212 lines ...
Sep 12 13:35:53.808: INFO: stderr: ""
Sep 12 13:35:53.808: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Sep 12 13:35:53.808: INFO: Waiting for all frontend pods to be Running.
Sep 12 13:35:58.960: INFO: Waiting for frontend to serve content.
Sep 12 13:35:59.073: INFO: Trying to add a new entry to the guestbook.
Sep 12 13:36:04.188: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Sep 12 13:36:09.301: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Sep 12 13:36:09.418: INFO: Running '/tmp/kubectl3391257765/kubectl --server=https://api.e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-9040 delete --grace-period=0 --force -f -'
Sep 12 13:36:09.955: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 12 13:36:09.955: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
... skipping 27 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:12.712: INFO: >>> kubeConfig: /root/.kube/config
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:13.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8424" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}

SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
... skipping 126 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":1,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:15.419: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 12 13:36:10.897: INFO: Waiting up to 5m0s for pod "pod-3e0a2343-6b65-4005-947c-e8f09677f241" in namespace "emptydir-2572" to be "Succeeded or Failed"
Sep 12 13:36:11.029: INFO: Pod "pod-3e0a2343-6b65-4005-947c-e8f09677f241": Phase="Pending", Reason="", readiness=false. Elapsed: 132.022735ms
Sep 12 13:36:13.138: INFO: Pod "pod-3e0a2343-6b65-4005-947c-e8f09677f241": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240918271s
Sep 12 13:36:15.248: INFO: Pod "pod-3e0a2343-6b65-4005-947c-e8f09677f241": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350677074s
Sep 12 13:36:17.360: INFO: Pod "pod-3e0a2343-6b65-4005-947c-e8f09677f241": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.463443399s
STEP: Saw pod success
Sep 12 13:36:17.360: INFO: Pod "pod-3e0a2343-6b65-4005-947c-e8f09677f241" satisfied condition "Succeeded or Failed"
Sep 12 13:36:17.469: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-3e0a2343-6b65-4005-947c-e8f09677f241 container test-container: <nil>
STEP: delete the pod
Sep 12 13:36:17.695: INFO: Waiting for pod pod-3e0a2343-6b65-4005-947c-e8f09677f241 to disappear
Sep 12 13:36:17.805: INFO: Pod pod-3e0a2343-6b65-4005-947c-e8f09677f241 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    new files should be created with FSGroup ownership when container is root
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:55
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":5,"skipped":45,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:18.056: INFO: Only supported for providers [gce gke] (not aws)
... skipping 141 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 180 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445

      Driver hostPath doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:08.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:19.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5258" for this suite.


• [SLOW TEST:10.997 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:19.609: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:44.585 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:185
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":5,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:19.767: INFO: Only supported for providers [gce gke] (not aws)
... skipping 87 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 12 13:36:12.486: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842" in namespace "projected-2656" to be "Succeeded or Failed"
Sep 12 13:36:12.593: INFO: Pod "downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842": Phase="Pending", Reason="", readiness=false. Elapsed: 107.020037ms
Sep 12 13:36:14.700: INFO: Pod "downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213922946s
Sep 12 13:36:16.808: INFO: Pod "downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321974771s
Sep 12 13:36:18.917: INFO: Pod "downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842": Phase="Pending", Reason="", readiness=false. Elapsed: 6.430775901s
Sep 12 13:36:21.041: INFO: Pod "downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.555054802s
STEP: Saw pod success
Sep 12 13:36:21.041: INFO: Pod "downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842" satisfied condition "Succeeded or Failed"
Sep 12 13:36:21.175: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842 container client-container: <nil>
STEP: delete the pod
Sep 12 13:36:21.548: INFO: Waiting for pod downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842 to disappear
Sep 12 13:36:21.741: INFO: Pod downwardapi-volume-6b2d45a7-4645-492a-a95f-5b98fbcc1842 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.187 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":73,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:22.063: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":30,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:22.471: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Sep 12 13:36:13.323: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 12 13:36:13.323: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-wq87
STEP: Creating a pod to test subpath
Sep 12 13:36:13.436: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-wq87" in namespace "provisioning-5116" to be "Succeeded or Failed"
Sep 12 13:36:13.545: INFO: Pod "pod-subpath-test-inlinevolume-wq87": Phase="Pending", Reason="", readiness=false. Elapsed: 109.185514ms
Sep 12 13:36:15.654: INFO: Pod "pod-subpath-test-inlinevolume-wq87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218022262s
Sep 12 13:36:17.763: INFO: Pod "pod-subpath-test-inlinevolume-wq87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326950717s
Sep 12 13:36:19.873: INFO: Pod "pod-subpath-test-inlinevolume-wq87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437006279s
Sep 12 13:36:22.014: INFO: Pod "pod-subpath-test-inlinevolume-wq87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.577880001s
Sep 12 13:36:24.175: INFO: Pod "pod-subpath-test-inlinevolume-wq87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.738792388s
STEP: Saw pod success
Sep 12 13:36:24.175: INFO: Pod "pod-subpath-test-inlinevolume-wq87" satisfied condition "Succeeded or Failed"
Sep 12 13:36:24.310: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-wq87 container test-container-subpath-inlinevolume-wq87: <nil>
STEP: delete the pod
Sep 12 13:36:24.773: INFO: Waiting for pod pod-subpath-test-inlinevolume-wq87 to disappear
Sep 12 13:36:24.886: INFO: Pod pod-subpath-test-inlinevolume-wq87 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-wq87
Sep 12 13:36:24.886: INFO: Deleting pod "pod-subpath-test-inlinevolume-wq87" in namespace "provisioning-5116"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:25.568: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-3344140f-e1f8-47a6-8d3e-bf1c752cfcde
STEP: Creating a pod to test consume secrets
Sep 12 13:36:16.198: INFO: Waiting up to 5m0s for pod "pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c" in namespace "secrets-4654" to be "Succeeded or Failed"
Sep 12 13:36:16.306: INFO: Pod "pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c": Phase="Pending", Reason="", readiness=false. Elapsed: 108.071473ms
Sep 12 13:36:18.413: INFO: Pod "pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215332481s
Sep 12 13:36:20.521: INFO: Pod "pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323485468s
Sep 12 13:36:22.658: INFO: Pod "pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46027782s
Sep 12 13:36:24.775: INFO: Pod "pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.577084225s
STEP: Saw pod success
Sep 12 13:36:24.775: INFO: Pod "pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c" satisfied condition "Succeeded or Failed"
Sep 12 13:36:24.887: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c container secret-volume-test: <nil>
STEP: delete the pod
Sep 12 13:36:25.343: INFO: Waiting for pod pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c to disappear
Sep 12 13:36:25.483: INFO: Pod pod-secrets-0aae0996-c025-4504-b828-7e670bddb07c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.313 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Sep 12 13:36:19.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
Sep 12 13:36:20.324: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 12 13:36:20.555: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5104" in namespace "provisioning-5104" to be "Succeeded or Failed"
Sep 12 13:36:20.663: INFO: Pod "hostpath-symlink-prep-provisioning-5104": Phase="Pending", Reason="", readiness=false. Elapsed: 108.626938ms
Sep 12 13:36:22.794: INFO: Pod "hostpath-symlink-prep-provisioning-5104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.238742521s
STEP: Saw pod success
Sep 12 13:36:22.794: INFO: Pod "hostpath-symlink-prep-provisioning-5104" satisfied condition "Succeeded or Failed"
Sep 12 13:36:22.794: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5104" in namespace "provisioning-5104"
Sep 12 13:36:22.911: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5104" to be fully deleted
Sep 12 13:36:23.027: INFO: Creating resource for inline volume
Sep 12 13:36:23.027: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Sep 12 13:36:23.027: INFO: Deleting pod "pod-subpath-test-inlinevolume-6vsj" in namespace "provisioning-5104"
Sep 12 13:36:23.256: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5104" in namespace "provisioning-5104" to be "Succeeded or Failed"
Sep 12 13:36:23.372: INFO: Pod "hostpath-symlink-prep-provisioning-5104": Phase="Pending", Reason="", readiness=false. Elapsed: 115.227977ms
Sep 12 13:36:25.493: INFO: Pod "hostpath-symlink-prep-provisioning-5104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.236822507s
STEP: Saw pod success
Sep 12 13:36:25.493: INFO: Pod "hostpath-symlink-prep-provisioning-5104" satisfied condition "Succeeded or Failed"
Sep 12 13:36:25.493: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5104" in namespace "provisioning-5104"
Sep 12 13:36:25.667: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5104" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:25.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5104" for this suite.
... skipping 62 lines ...
Sep 12 13:35:56.093: INFO: PersistentVolumeClaim pvc-x7gqj found but phase is Pending instead of Bound.
Sep 12 13:35:58.202: INFO: PersistentVolumeClaim pvc-x7gqj found and phase=Bound (8.545295839s)
Sep 12 13:35:58.202: INFO: Waiting up to 3m0s for PersistentVolume local-xgcbg to have phase Bound
Sep 12 13:35:58.309: INFO: PersistentVolume local-xgcbg found and phase=Bound (107.483407ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n68n
STEP: Creating a pod to test atomic-volume-subpath
Sep 12 13:35:58.637: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n68n" in namespace "provisioning-9780" to be "Succeeded or Failed"
Sep 12 13:35:58.746: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Pending", Reason="", readiness=false. Elapsed: 109.036974ms
Sep 12 13:36:00.854: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21735946s
Sep 12 13:36:02.963: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Running", Reason="", readiness=true. Elapsed: 4.325738984s
Sep 12 13:36:05.071: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Running", Reason="", readiness=true. Elapsed: 6.433975931s
Sep 12 13:36:07.180: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Running", Reason="", readiness=true. Elapsed: 8.543386804s
Sep 12 13:36:09.288: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Running", Reason="", readiness=true. Elapsed: 10.651186728s
... skipping 2 lines ...
Sep 12 13:36:15.615: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Running", Reason="", readiness=true. Elapsed: 16.977779875s
Sep 12 13:36:17.732: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Running", Reason="", readiness=true. Elapsed: 19.094886095s
Sep 12 13:36:19.841: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Running", Reason="", readiness=true. Elapsed: 21.2037256s
Sep 12 13:36:21.983: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Running", Reason="", readiness=true. Elapsed: 23.346401506s
Sep 12 13:36:24.112: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.474693102s
STEP: Saw pod success
Sep 12 13:36:24.112: INFO: Pod "pod-subpath-test-preprovisionedpv-n68n" satisfied condition "Succeeded or Failed"
Sep 12 13:36:24.274: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-n68n container test-container-subpath-preprovisionedpv-n68n: <nil>
STEP: delete the pod
Sep 12 13:36:24.753: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n68n to disappear
Sep 12 13:36:24.868: INFO: Pod pod-subpath-test-preprovisionedpv-n68n no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n68n
Sep 12 13:36:24.868: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n68n" in namespace "provisioning-9780"
... skipping 29 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-ace7743f-5803-4e59-ac0d-976300d24fe7
STEP: Creating a pod to test consume secrets
Sep 12 13:36:22.949: INFO: Waiting up to 5m0s for pod "pod-secrets-6e6d5cd4-0eca-4957-80d5-8f9e90fea8a3" in namespace "secrets-1778" to be "Succeeded or Failed"
Sep 12 13:36:23.064: INFO: Pod "pod-secrets-6e6d5cd4-0eca-4957-80d5-8f9e90fea8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 114.618184ms
Sep 12 13:36:25.219: INFO: Pod "pod-secrets-6e6d5cd4-0eca-4957-80d5-8f9e90fea8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269766207s
Sep 12 13:36:27.397: INFO: Pod "pod-secrets-6e6d5cd4-0eca-4957-80d5-8f9e90fea8a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.447535333s
STEP: Saw pod success
Sep 12 13:36:27.397: INFO: Pod "pod-secrets-6e6d5cd4-0eca-4957-80d5-8f9e90fea8a3" satisfied condition "Succeeded or Failed"
Sep 12 13:36:27.665: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-secrets-6e6d5cd4-0eca-4957-80d5-8f9e90fea8a3 container secret-volume-test: <nil>
STEP: delete the pod
Sep 12 13:36:27.975: INFO: Waiting for pod pod-secrets-6e6d5cd4-0eca-4957-80d5-8f9e90fea8a3 to disappear
Sep 12 13:36:28.088: INFO: Pod pod-secrets-6e6d5cd4-0eca-4957-80d5-8f9e90fea8a3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.249 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":82,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:28.342: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 65 lines ...
Sep 12 13:36:10.755: INFO: PersistentVolumeClaim pvc-fjglx found but phase is Pending instead of Bound.
Sep 12 13:36:12.865: INFO: PersistentVolumeClaim pvc-fjglx found and phase=Bound (14.880895522s)
Sep 12 13:36:12.865: INFO: Waiting up to 3m0s for PersistentVolume local-7q269 to have phase Bound
Sep 12 13:36:12.988: INFO: PersistentVolume local-7q269 found and phase=Bound (123.497468ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fl8f
STEP: Creating a pod to test subpath
Sep 12 13:36:13.316: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fl8f" in namespace "provisioning-7512" to be "Succeeded or Failed"
Sep 12 13:36:13.425: INFO: Pod "pod-subpath-test-preprovisionedpv-fl8f": Phase="Pending", Reason="", readiness=false. Elapsed: 108.474659ms
Sep 12 13:36:15.534: INFO: Pod "pod-subpath-test-preprovisionedpv-fl8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217586359s
Sep 12 13:36:17.644: INFO: Pod "pod-subpath-test-preprovisionedpv-fl8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328194026s
Sep 12 13:36:19.757: INFO: Pod "pod-subpath-test-preprovisionedpv-fl8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441021775s
Sep 12 13:36:21.867: INFO: Pod "pod-subpath-test-preprovisionedpv-fl8f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55116059s
Sep 12 13:36:23.986: INFO: Pod "pod-subpath-test-preprovisionedpv-fl8f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.670186181s
Sep 12 13:36:26.098: INFO: Pod "pod-subpath-test-preprovisionedpv-fl8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.782182332s
STEP: Saw pod success
Sep 12 13:36:26.098: INFO: Pod "pod-subpath-test-preprovisionedpv-fl8f" satisfied condition "Succeeded or Failed"
Sep 12 13:36:26.210: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fl8f container test-container-subpath-preprovisionedpv-fl8f: <nil>
STEP: delete the pod
Sep 12 13:36:26.476: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fl8f to disappear
Sep 12 13:36:26.633: INFO: Pod pod-subpath-test-preprovisionedpv-fl8f no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fl8f
Sep 12 13:36:26.633: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fl8f" in namespace "provisioning-7512"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:28.658: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 68 lines ...
Sep 12 13:36:30.110: INFO: AfterEach: Cleaning up test resources.
Sep 12 13:36:30.110: INFO: pvc is nil
Sep 12 13:36:30.110: INFO: Deleting PersistentVolume "hostpath-vrlxc"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":3,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
... skipping 52 lines ...
STEP: Deleting pod aws-client in namespace volume-2146
Sep 12 13:36:16.670: INFO: Waiting for pod aws-client to disappear
Sep 12 13:36:16.780: INFO: Pod aws-client still exists
Sep 12 13:36:18.781: INFO: Waiting for pod aws-client to disappear
Sep 12 13:36:18.890: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Sep 12 13:36:19.099: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0ba7bcf37dc8f7f9b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ba7bcf37dc8f7f9b is currently attached to i-0645ee0b79d982420
	status code: 400, request id: 17eb374a-b4ad-4cfe-892d-b1b56f4dded7
Sep 12 13:36:24.712: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0ba7bcf37dc8f7f9b", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0ba7bcf37dc8f7f9b is currently attached to i-0645ee0b79d982420
	status code: 400, request id: 608bbd53-c241-4c51-a431-0aa6b5aa4a6b
Sep 12 13:36:30.305: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0ba7bcf37dc8f7f9b".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:30.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-2146" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":2,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:30.601: INFO: Only supported for providers [vsphere] (not aws)
... skipping 118 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 27 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Sep 12 13:36:29.024: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-4ceaf1fa-f3af-47be-b68c-91c094653944" in namespace "security-context-test-5054" to be "Succeeded or Failed"
Sep 12 13:36:29.131: INFO: Pod "busybox-privileged-true-4ceaf1fa-f3af-47be-b68c-91c094653944": Phase="Pending", Reason="", readiness=false. Elapsed: 106.764005ms
Sep 12 13:36:31.241: INFO: Pod "busybox-privileged-true-4ceaf1fa-f3af-47be-b68c-91c094653944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.217057846s
Sep 12 13:36:31.241: INFO: Pod "busybox-privileged-true-4ceaf1fa-f3af-47be-b68c-91c094653944" satisfied condition "Succeeded or Failed"
Sep 12 13:36:31.351: INFO: Got logs for pod "busybox-privileged-true-4ceaf1fa-f3af-47be-b68c-91c094653944": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:31.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5054" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":12,"skipped":89,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:31.607: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 113 lines ...
Sep 12 13:36:26.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 12 13:36:26.833: INFO: Waiting up to 5m0s for pod "pod-3cc3c161-b5a7-4211-a74c-a18c9784f4bc" in namespace "emptydir-1124" to be "Succeeded or Failed"
Sep 12 13:36:27.020: INFO: Pod "pod-3cc3c161-b5a7-4211-a74c-a18c9784f4bc": Phase="Pending", Reason="", readiness=false. Elapsed: 187.121294ms
Sep 12 13:36:29.130: INFO: Pod "pod-3cc3c161-b5a7-4211-a74c-a18c9784f4bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296768084s
Sep 12 13:36:31.240: INFO: Pod "pod-3cc3c161-b5a7-4211-a74c-a18c9784f4bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.406377314s
STEP: Saw pod success
Sep 12 13:36:31.240: INFO: Pod "pod-3cc3c161-b5a7-4211-a74c-a18c9784f4bc" satisfied condition "Succeeded or Failed"
Sep 12 13:36:31.349: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-3cc3c161-b5a7-4211-a74c-a18c9784f4bc container test-container: <nil>
STEP: delete the pod
Sep 12 13:36:31.619: INFO: Waiting for pod pod-3cc3c161-b5a7-4211-a74c-a18c9784f4bc to disappear
Sep 12 13:36:31.728: INFO: Pod pod-3cc3c161-b5a7-4211-a74c-a18c9784f4bc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.889 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":47,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:31.980: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 28 lines ...
Sep 12 13:36:30.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 12 13:36:30.910: INFO: Waiting up to 5m0s for pod "pod-5ae99c56-52d5-49df-87f4-5f306b66ed01" in namespace "emptydir-37" to be "Succeeded or Failed"
Sep 12 13:36:31.018: INFO: Pod "pod-5ae99c56-52d5-49df-87f4-5f306b66ed01": Phase="Pending", Reason="", readiness=false. Elapsed: 108.424304ms
Sep 12 13:36:33.127: INFO: Pod "pod-5ae99c56-52d5-49df-87f4-5f306b66ed01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216929099s
Sep 12 13:36:35.239: INFO: Pod "pod-5ae99c56-52d5-49df-87f4-5f306b66ed01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329751198s
STEP: Saw pod success
Sep 12 13:36:35.240: INFO: Pod "pod-5ae99c56-52d5-49df-87f4-5f306b66ed01" satisfied condition "Succeeded or Failed"
Sep 12 13:36:35.348: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-5ae99c56-52d5-49df-87f4-5f306b66ed01 container test-container: <nil>
STEP: delete the pod
Sep 12 13:36:35.570: INFO: Waiting for pod pod-5ae99c56-52d5-49df-87f4-5f306b66ed01 to disappear
Sep 12 13:36:35.681: INFO: Pod pod-5ae99c56-52d5-49df-87f4-5f306b66ed01 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.655 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:34.601 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:915
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:37.045: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":11,"skipped":120,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":6,"skipped":92,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:38.595: INFO: Only supported for providers [gce gke] (not aws)
... skipping 25 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:38.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":7,"skipped":96,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":1,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:40.499: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
      Driver emptydir doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":39,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:26.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:13.873 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":5,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:19.926 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":5,"skipped":38,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:35:53.640: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Sep 12 13:36:10.147: INFO: PersistentVolumeClaim pvc-62v5r found but phase is Pending instead of Bound.
Sep 12 13:36:12.258: INFO: PersistentVolumeClaim pvc-62v5r found and phase=Bound (10.666913974s)
Sep 12 13:36:12.258: INFO: Waiting up to 3m0s for PersistentVolume local-l4s6v to have phase Bound
Sep 12 13:36:12.367: INFO: PersistentVolume local-l4s6v found and phase=Bound (108.799191ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6hq9
STEP: Creating a pod to test atomic-volume-subpath
Sep 12 13:36:12.694: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6hq9" in namespace "provisioning-2710" to be "Succeeded or Failed"
Sep 12 13:36:12.803: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Pending", Reason="", readiness=false. Elapsed: 108.357233ms
Sep 12 13:36:14.912: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217925577s
Sep 12 13:36:17.021: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326881783s
Sep 12 13:36:19.130: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436084588s
Sep 12 13:36:21.291: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Running", Reason="", readiness=true. Elapsed: 8.596545584s
Sep 12 13:36:23.402: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Running", Reason="", readiness=true. Elapsed: 10.707297117s
... skipping 3 lines ...
Sep 12 13:36:31.934: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Running", Reason="", readiness=true. Elapsed: 19.239723807s
Sep 12 13:36:34.043: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Running", Reason="", readiness=true. Elapsed: 21.348329035s
Sep 12 13:36:36.151: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Running", Reason="", readiness=true. Elapsed: 23.45707015s
Sep 12 13:36:38.315: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Running", Reason="", readiness=true. Elapsed: 25.620336695s
Sep 12 13:36:40.425: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.730900701s
STEP: Saw pod success
Sep 12 13:36:40.425: INFO: Pod "pod-subpath-test-preprovisionedpv-6hq9" satisfied condition "Succeeded or Failed"
Sep 12 13:36:40.536: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-6hq9 container test-container-subpath-preprovisionedpv-6hq9: <nil>
STEP: delete the pod
Sep 12 13:36:40.767: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6hq9 to disappear
Sep 12 13:36:40.876: INFO: Pod pod-subpath-test-preprovisionedpv-6hq9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6hq9
Sep 12 13:36:40.876: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6hq9" in namespace "provisioning-2710"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":17,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:42.482: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 108 lines ...
Sep 12 13:36:25.945: INFO: PersistentVolumeClaim pvc-9nhpw found but phase is Pending instead of Bound.
Sep 12 13:36:28.055: INFO: PersistentVolumeClaim pvc-9nhpw found and phase=Bound (4.334753277s)
Sep 12 13:36:28.055: INFO: Waiting up to 3m0s for PersistentVolume local-jhhsl to have phase Bound
Sep 12 13:36:28.167: INFO: PersistentVolume local-jhhsl found and phase=Bound (111.81128ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qv2c
STEP: Creating a pod to test subpath
Sep 12 13:36:28.518: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qv2c" in namespace "provisioning-8218" to be "Succeeded or Failed"
Sep 12 13:36:28.627: INFO: Pod "pod-subpath-test-preprovisionedpv-qv2c": Phase="Pending", Reason="", readiness=false. Elapsed: 108.87126ms
Sep 12 13:36:30.736: INFO: Pod "pod-subpath-test-preprovisionedpv-qv2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217711678s
Sep 12 13:36:32.846: INFO: Pod "pod-subpath-test-preprovisionedpv-qv2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327948687s
Sep 12 13:36:34.956: INFO: Pod "pod-subpath-test-preprovisionedpv-qv2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437861208s
Sep 12 13:36:37.068: INFO: Pod "pod-subpath-test-preprovisionedpv-qv2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550154868s
Sep 12 13:36:39.177: INFO: Pod "pod-subpath-test-preprovisionedpv-qv2c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659218063s
Sep 12 13:36:41.288: INFO: Pod "pod-subpath-test-preprovisionedpv-qv2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.769259043s
STEP: Saw pod success
Sep 12 13:36:41.288: INFO: Pod "pod-subpath-test-preprovisionedpv-qv2c" satisfied condition "Succeeded or Failed"
Sep 12 13:36:41.396: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-qv2c container test-container-subpath-preprovisionedpv-qv2c: <nil>
STEP: delete the pod
Sep 12 13:36:41.631: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qv2c to disappear
Sep 12 13:36:41.741: INFO: Pod pod-subpath-test-preprovisionedpv-qv2c no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qv2c
Sep 12 13:36:41.741: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qv2c" in namespace "provisioning-8218"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:70.078 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
• [SLOW TEST:65.239 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":5,"skipped":66,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:44.220: INFO: Only supported for providers [gce gke] (not aws)
... skipping 155 lines ...
• [SLOW TEST:13.909 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:44.662: INFO: Only supported for providers [azure] (not aws)
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:45.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-4408" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":6,"skipped":45,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":2,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:45.940: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
Sep 12 13:36:05.160: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-7349v8l4f
STEP: creating a claim
Sep 12 13:36:05.270: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-nfvw
STEP: Creating a pod to test subpath
Sep 12 13:36:05.606: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-nfvw" in namespace "provisioning-7349" to be "Succeeded or Failed"
Sep 12 13:36:05.716: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 109.746663ms
Sep 12 13:36:07.827: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220639126s
Sep 12 13:36:09.936: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330337779s
Sep 12 13:36:12.047: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441269206s
Sep 12 13:36:14.158: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551823308s
Sep 12 13:36:16.268: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662328127s
... skipping 2 lines ...
Sep 12 13:36:22.658: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 17.051739617s
Sep 12 13:36:24.775: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 19.168571724s
Sep 12 13:36:26.945: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 21.338555749s
Sep 12 13:36:29.058: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Pending", Reason="", readiness=false. Elapsed: 23.45165139s
Sep 12 13:36:31.176: INFO: Pod "pod-subpath-test-dynamicpv-nfvw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.569444985s
STEP: Saw pod success
Sep 12 13:36:31.176: INFO: Pod "pod-subpath-test-dynamicpv-nfvw" satisfied condition "Succeeded or Failed"
Sep 12 13:36:31.285: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-nfvw container test-container-subpath-dynamicpv-nfvw: <nil>
STEP: delete the pod
Sep 12 13:36:31.547: INFO: Waiting for pod pod-subpath-test-dynamicpv-nfvw to disappear
Sep 12 13:36:31.656: INFO: Pod pod-subpath-test-dynamicpv-nfvw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-nfvw
Sep 12 13:36:31.656: INFO: Deleting pod "pod-subpath-test-dynamicpv-nfvw" in namespace "provisioning-7349"
... skipping 47 lines ...
• [SLOW TEST:38.203 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:49.178: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 222 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":71,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:50.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6914" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":6,"skipped":58,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:14.974 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":5,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:50.907: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-9ded2baf-0ae1-43e8-80b7-dcc2c1284116
STEP: Creating a pod to test consume secrets
Sep 12 13:36:41.741: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043" in namespace "projected-4798" to be "Succeeded or Failed"
Sep 12 13:36:41.850: INFO: Pod "pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043": Phase="Pending", Reason="", readiness=false. Elapsed: 108.905638ms
Sep 12 13:36:43.959: INFO: Pod "pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217582952s
Sep 12 13:36:46.068: INFO: Pod "pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327096605s
Sep 12 13:36:48.178: INFO: Pod "pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436839917s
Sep 12 13:36:50.290: INFO: Pod "pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.549092006s
STEP: Saw pod success
Sep 12 13:36:50.290: INFO: Pod "pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043" satisfied condition "Succeeded or Failed"
Sep 12 13:36:50.408: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 12 13:36:50.656: INFO: Waiting for pod pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043 to disappear
Sep 12 13:36:50.764: INFO: Pod pod-projected-secrets-ddec9f1e-8d79-41d4-95ab-fa19b3599043 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:10.576 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:51.128: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 105 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:51.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9451" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":6,"skipped":40,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:52.061: INFO: Driver local doesn't support ext4 -- skipping
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      running a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:512
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:54.932: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 109 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-be1a9375-fc3d-4124-85ff-3ae8b4520b17
STEP: Creating a pod to test consume secrets
Sep 12 13:36:45.904: INFO: Waiting up to 5m0s for pod "pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee" in namespace "secrets-3970" to be "Succeeded or Failed"
Sep 12 13:36:46.013: INFO: Pod "pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee": Phase="Pending", Reason="", readiness=false. Elapsed: 109.108225ms
Sep 12 13:36:48.123: INFO: Pod "pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219038106s
Sep 12 13:36:50.236: INFO: Pod "pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331952354s
Sep 12 13:36:52.347: INFO: Pod "pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442851163s
Sep 12 13:36:54.459: INFO: Pod "pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.554781188s
STEP: Saw pod success
Sep 12 13:36:54.459: INFO: Pod "pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee" satisfied condition "Succeeded or Failed"
Sep 12 13:36:54.570: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee container secret-volume-test: <nil>
STEP: delete the pod
Sep 12 13:36:54.796: INFO: Waiting for pod pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee to disappear
Sep 12 13:36:54.905: INFO: Pod pod-secrets-9d07a51c-a9e7-444c-9fb2-ded61dc67bee no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 15 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Sep 12 13:36:51.240: INFO: Waiting up to 5m0s for pod "busybox-user-0-9e1ef6b7-f79b-4117-b746-890466a49efc" in namespace "security-context-test-1289" to be "Succeeded or Failed"
Sep 12 13:36:51.350: INFO: Pod "busybox-user-0-9e1ef6b7-f79b-4117-b746-890466a49efc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.671341ms
Sep 12 13:36:53.486: INFO: Pod "busybox-user-0-9e1ef6b7-f79b-4117-b746-890466a49efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24608792s
Sep 12 13:36:55.597: INFO: Pod "busybox-user-0-9e1ef6b7-f79b-4117-b746-890466a49efc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356881577s
Sep 12 13:36:57.707: INFO: Pod "busybox-user-0-9e1ef6b7-f79b-4117-b746-890466a49efc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.466900696s
Sep 12 13:36:57.707: INFO: Pod "busybox-user-0-9e1ef6b7-f79b-4117-b746-890466a49efc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:57.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1289" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":73,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:57.937: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 39 lines ...
• [SLOW TEST:7.320 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":60,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:36:58.141: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:15.460 seconds]
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:36:59.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4724" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":10,"skipped":76,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:473
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":12,"skipped":124,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "apply-1170" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":8,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:00.066: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 97 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:37:01.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-3485" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":13,"skipped":126,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:37:01.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":14,"skipped":130,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:52.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Sep 12 13:36:52.734: INFO: Waiting up to 5m0s for pod "downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908" in namespace "downward-api-177" to be "Succeeded or Failed"
Sep 12 13:36:52.842: INFO: Pod "downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908": Phase="Pending", Reason="", readiness=false. Elapsed: 108.410826ms
Sep 12 13:36:54.951: INFO: Pod "downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217144223s
Sep 12 13:36:57.071: INFO: Pod "downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337082436s
Sep 12 13:36:59.180: INFO: Pod "downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446018665s
Sep 12 13:37:01.307: INFO: Pod "downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.573335955s
STEP: Saw pod success
Sep 12 13:37:01.307: INFO: Pod "downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908" satisfied condition "Succeeded or Failed"
Sep 12 13:37:01.450: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908 container dapi-container: <nil>
STEP: delete the pod
Sep 12 13:37:01.685: INFO: Waiting for pod downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908 to disappear
Sep 12 13:37:01.793: INFO: Pod downward-api-cb50aca8-b539-4482-a3bd-e0b1fd0e0908 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.938 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":7,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:997
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:998
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":3,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:05.963: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 78 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":4,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:06.728: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-6832f6f5-a1c3-4928-8e29-5a053c373cc6
STEP: Creating a pod to test consume configMaps
Sep 12 13:36:55.767: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43" in namespace "configmap-9202" to be "Succeeded or Failed"
Sep 12 13:36:55.877: INFO: Pod "pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43": Phase="Pending", Reason="", readiness=false. Elapsed: 109.896366ms
Sep 12 13:36:57.985: INFO: Pod "pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218524936s
Sep 12 13:37:00.094: INFO: Pod "pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327697621s
Sep 12 13:37:02.204: INFO: Pod "pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43763339s
Sep 12 13:37:04.316: INFO: Pod "pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54929194s
Sep 12 13:37:06.426: INFO: Pod "pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.659012379s
STEP: Saw pod success
Sep 12 13:37:06.426: INFO: Pod "pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43" satisfied condition "Succeeded or Failed"
Sep 12 13:37:06.539: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43 container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:37:06.785: INFO: Waiting for pod pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43 to disappear
Sep 12 13:37:06.915: INFO: Pod pod-configmaps-c6607c98-dcd0-4c70-87d7-1433c61add43 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.150 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:8.261 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support memory backed volumes of specified size
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:298
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":11,"skipped":80,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:07.542: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 141 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":85,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:09.060: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 47 lines ...
Sep 12 13:36:55.222: INFO: PersistentVolumeClaim pvc-82l5b found but phase is Pending instead of Bound.
Sep 12 13:36:57.351: INFO: PersistentVolumeClaim pvc-82l5b found and phase=Bound (10.675978197s)
Sep 12 13:36:57.351: INFO: Waiting up to 3m0s for PersistentVolume local-2zmzb to have phase Bound
Sep 12 13:36:57.460: INFO: PersistentVolume local-2zmzb found and phase=Bound (108.844502ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-bd4f
STEP: Creating a pod to test exec-volume-test
Sep 12 13:36:57.788: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-bd4f" in namespace "volume-4746" to be "Succeeded or Failed"
Sep 12 13:36:57.897: INFO: Pod "exec-volume-test-preprovisionedpv-bd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 108.81707ms
Sep 12 13:37:00.008: INFO: Pod "exec-volume-test-preprovisionedpv-bd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219470952s
Sep 12 13:37:02.117: INFO: Pod "exec-volume-test-preprovisionedpv-bd4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328772134s
Sep 12 13:37:04.228: INFO: Pod "exec-volume-test-preprovisionedpv-bd4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.43940321s
STEP: Saw pod success
Sep 12 13:37:04.228: INFO: Pod "exec-volume-test-preprovisionedpv-bd4f" satisfied condition "Succeeded or Failed"
Sep 12 13:37:04.342: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-bd4f container exec-container-preprovisionedpv-bd4f: <nil>
STEP: delete the pod
Sep 12 13:37:04.636: INFO: Waiting for pod exec-volume-test-preprovisionedpv-bd4f to disappear
Sep 12 13:37:04.748: INFO: Pod exec-volume-test-preprovisionedpv-bd4f no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-bd4f
Sep 12 13:37:04.748: INFO: Deleting pod "exec-volume-test-preprovisionedpv-bd4f" in namespace "volume-4746"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":97,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Sep 12 13:36:45.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
Sep 12 13:36:46.524: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 12 13:36:46.748: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5420" in namespace "provisioning-5420" to be "Succeeded or Failed"
Sep 12 13:36:46.858: INFO: Pod "hostpath-symlink-prep-provisioning-5420": Phase="Pending", Reason="", readiness=false. Elapsed: 109.952894ms
Sep 12 13:36:48.970: INFO: Pod "hostpath-symlink-prep-provisioning-5420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221239959s
Sep 12 13:36:51.085: INFO: Pod "hostpath-symlink-prep-provisioning-5420": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337213888s
Sep 12 13:36:53.196: INFO: Pod "hostpath-symlink-prep-provisioning-5420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.448048778s
STEP: Saw pod success
Sep 12 13:36:53.196: INFO: Pod "hostpath-symlink-prep-provisioning-5420" satisfied condition "Succeeded or Failed"
Sep 12 13:36:53.196: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5420" in namespace "provisioning-5420"
Sep 12 13:36:53.316: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5420" to be fully deleted
Sep 12 13:36:53.442: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tmj6
STEP: Creating a pod to test subpath
Sep 12 13:36:53.561: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-tmj6" in namespace "provisioning-5420" to be "Succeeded or Failed"
Sep 12 13:36:53.672: INFO: Pod "pod-subpath-test-inlinevolume-tmj6": Phase="Pending", Reason="", readiness=false. Elapsed: 111.161943ms
Sep 12 13:36:55.791: INFO: Pod "pod-subpath-test-inlinevolume-tmj6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22954706s
Sep 12 13:36:57.901: INFO: Pod "pod-subpath-test-inlinevolume-tmj6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339935693s
Sep 12 13:37:00.011: INFO: Pod "pod-subpath-test-inlinevolume-tmj6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449719108s
Sep 12 13:37:02.127: INFO: Pod "pod-subpath-test-inlinevolume-tmj6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.566027445s
STEP: Saw pod success
Sep 12 13:37:02.127: INFO: Pod "pod-subpath-test-inlinevolume-tmj6" satisfied condition "Succeeded or Failed"
Sep 12 13:37:02.237: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-tmj6 container test-container-subpath-inlinevolume-tmj6: <nil>
STEP: delete the pod
Sep 12 13:37:02.477: INFO: Waiting for pod pod-subpath-test-inlinevolume-tmj6 to disappear
Sep 12 13:37:02.587: INFO: Pod pod-subpath-test-inlinevolume-tmj6 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-tmj6
Sep 12 13:37:02.587: INFO: Deleting pod "pod-subpath-test-inlinevolume-tmj6" in namespace "provisioning-5420"
STEP: Deleting pod
Sep 12 13:37:02.696: INFO: Deleting pod "pod-subpath-test-inlinevolume-tmj6" in namespace "provisioning-5420"
Sep 12 13:37:02.922: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5420" in namespace "provisioning-5420" to be "Succeeded or Failed"
Sep 12 13:37:03.032: INFO: Pod "hostpath-symlink-prep-provisioning-5420": Phase="Pending", Reason="", readiness=false. Elapsed: 109.690184ms
Sep 12 13:37:05.142: INFO: Pod "hostpath-symlink-prep-provisioning-5420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219679301s
Sep 12 13:37:07.253: INFO: Pod "hostpath-symlink-prep-provisioning-5420": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330820928s
Sep 12 13:37:09.363: INFO: Pod "hostpath-symlink-prep-provisioning-5420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.441069587s
STEP: Saw pod success
Sep 12 13:37:09.363: INFO: Pod "hostpath-symlink-prep-provisioning-5420" satisfied condition "Succeeded or Failed"
Sep 12 13:37:09.363: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5420" in namespace "provisioning-5420"
Sep 12 13:37:09.486: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5420" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:37:09.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5420" for this suite.
... skipping 139 lines ...
Sep 12 13:37:07.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 12 13:37:08.241: INFO: Waiting up to 5m0s for pod "pod-c94244cc-9384-4f8f-a557-c029cdcc691c" in namespace "emptydir-492" to be "Succeeded or Failed"
Sep 12 13:37:08.351: INFO: Pod "pod-c94244cc-9384-4f8f-a557-c029cdcc691c": Phase="Pending", Reason="", readiness=false. Elapsed: 110.158086ms
Sep 12 13:37:10.471: INFO: Pod "pod-c94244cc-9384-4f8f-a557-c029cdcc691c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.229862325s
STEP: Saw pod success
Sep 12 13:37:10.471: INFO: Pod "pod-c94244cc-9384-4f8f-a557-c029cdcc691c" satisfied condition "Succeeded or Failed"
Sep 12 13:37:10.586: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-c94244cc-9384-4f8f-a557-c029cdcc691c container test-container: <nil>
STEP: delete the pod
Sep 12 13:37:10.817: INFO: Waiting for pod pod-c94244cc-9384-4f8f-a557-c029cdcc691c to disappear
Sep 12 13:37:10.926: INFO: Pod pod-c94244cc-9384-4f8f-a557-c029cdcc691c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:37:10.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-492" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:11.171: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 58 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:55.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 28 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:11.606: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 44 lines ...
Sep 12 13:36:54.960: INFO: PersistentVolumeClaim pvc-zjjjn found but phase is Pending instead of Bound.
Sep 12 13:36:57.073: INFO: PersistentVolumeClaim pvc-zjjjn found and phase=Bound (10.652340055s)
Sep 12 13:36:57.073: INFO: Waiting up to 3m0s for PersistentVolume local-4qq7k to have phase Bound
Sep 12 13:36:57.181: INFO: PersistentVolume local-4qq7k found and phase=Bound (107.820775ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zm6b
STEP: Creating a pod to test subpath
Sep 12 13:36:57.508: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zm6b" in namespace "provisioning-3969" to be "Succeeded or Failed"
Sep 12 13:36:57.616: INFO: Pod "pod-subpath-test-preprovisionedpv-zm6b": Phase="Pending", Reason="", readiness=false. Elapsed: 107.198525ms
Sep 12 13:36:59.724: INFO: Pod "pod-subpath-test-preprovisionedpv-zm6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215748103s
Sep 12 13:37:01.833: INFO: Pod "pod-subpath-test-preprovisionedpv-zm6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323939046s
Sep 12 13:37:03.941: INFO: Pod "pod-subpath-test-preprovisionedpv-zm6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432009939s
Sep 12 13:37:06.049: INFO: Pod "pod-subpath-test-preprovisionedpv-zm6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.539959213s
Sep 12 13:37:08.156: INFO: Pod "pod-subpath-test-preprovisionedpv-zm6b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.647911441s
Sep 12 13:37:10.267: INFO: Pod "pod-subpath-test-preprovisionedpv-zm6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.758779135s
STEP: Saw pod success
Sep 12 13:37:10.267: INFO: Pod "pod-subpath-test-preprovisionedpv-zm6b" satisfied condition "Succeeded or Failed"
Sep 12 13:37:10.375: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-zm6b container test-container-subpath-preprovisionedpv-zm6b: <nil>
STEP: delete the pod
Sep 12 13:37:10.609: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zm6b to disappear
Sep 12 13:37:10.716: INFO: Pod pod-subpath-test-preprovisionedpv-zm6b no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zm6b
Sep 12 13:37:10.716: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zm6b" in namespace "provisioning-3969"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Sep 12 13:37:10.573: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5573" to be "Succeeded or Failed"
Sep 12 13:37:10.682: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 108.430212ms
Sep 12 13:37:12.791: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217785582s
Sep 12 13:37:14.901: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327566533s
STEP: Saw pod success
Sep 12 13:37:14.901: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 12 13:37:15.012: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Sep 12 13:37:15.259: INFO: Waiting for pod pod-host-path-test to disappear
Sep 12 13:37:15.368: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.695 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":107,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:15.619: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 185 lines ...
• [SLOW TEST:44.142 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":13,"skipped":102,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":9,"skipped":85,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:37:10.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-68398af6-2f76-47d1-b0af-801e069a3b74
STEP: Creating a pod to test consume configMaps
Sep 12 13:37:11.150: INFO: Waiting up to 5m0s for pod "pod-configmaps-ecf2f33b-3384-4a0e-ad3b-d9584e3ad9ed" in namespace "configmap-4470" to be "Succeeded or Failed"
Sep 12 13:37:11.262: INFO: Pod "pod-configmaps-ecf2f33b-3384-4a0e-ad3b-d9584e3ad9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 111.45874ms
Sep 12 13:37:13.371: INFO: Pod "pod-configmaps-ecf2f33b-3384-4a0e-ad3b-d9584e3ad9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221146878s
Sep 12 13:37:15.492: INFO: Pod "pod-configmaps-ecf2f33b-3384-4a0e-ad3b-d9584e3ad9ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.34204461s
STEP: Saw pod success
Sep 12 13:37:15.492: INFO: Pod "pod-configmaps-ecf2f33b-3384-4a0e-ad3b-d9584e3ad9ed" satisfied condition "Succeeded or Failed"
Sep 12 13:37:15.604: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-configmaps-ecf2f33b-3384-4a0e-ad3b-d9584e3ad9ed container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:37:15.842: INFO: Waiting for pod pod-configmaps-ecf2f33b-3384-4a0e-ad3b-d9584e3ad9ed to disappear
Sep 12 13:37:15.953: INFO: Pod pod-configmaps-ecf2f33b-3384-4a0e-ad3b-d9584e3ad9ed no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.826 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":85,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:16.196: INFO: Only supported for providers [openstack] (not aws)
... skipping 147 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:37:18.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-361" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":7,"skipped":44,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":7,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:26.814: INFO: >>> kubeConfig: /root/.kube/config
... skipping 59 lines ...
Sep 12 13:36:37.750: INFO: PersistentVolumeClaim csi-hostpathrsg2t found but phase is Pending instead of Bound.
Sep 12 13:36:39.859: INFO: PersistentVolumeClaim csi-hostpathrsg2t found but phase is Pending instead of Bound.
Sep 12 13:36:41.969: INFO: PersistentVolumeClaim csi-hostpathrsg2t found but phase is Pending instead of Bound.
Sep 12 13:36:44.079: INFO: PersistentVolumeClaim csi-hostpathrsg2t found and phase=Bound (10.65858768s)
STEP: Creating pod pod-subpath-test-dynamicpv-ksm6
STEP: Creating a pod to test subpath
Sep 12 13:36:44.413: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ksm6" in namespace "provisioning-1385" to be "Succeeded or Failed"
Sep 12 13:36:44.522: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 109.05343ms
Sep 12 13:36:46.633: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219608797s
Sep 12 13:36:48.743: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32939134s
Sep 12 13:36:50.853: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4392652s
Sep 12 13:36:52.964: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550504761s
Sep 12 13:36:55.074: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.660431476s
Sep 12 13:36:57.185: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.771372833s
Sep 12 13:36:59.296: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.882364355s
Sep 12 13:37:01.431: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.017572794s
Sep 12 13:37:03.541: INFO: Pod "pod-subpath-test-dynamicpv-ksm6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.127595573s
STEP: Saw pod success
Sep 12 13:37:03.541: INFO: Pod "pod-subpath-test-dynamicpv-ksm6" satisfied condition "Succeeded or Failed"
Sep 12 13:37:03.653: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-ksm6 container test-container-subpath-dynamicpv-ksm6: <nil>
STEP: delete the pod
Sep 12 13:37:03.892: INFO: Waiting for pod pod-subpath-test-dynamicpv-ksm6 to disappear
Sep 12 13:37:04.002: INFO: Pod pod-subpath-test-dynamicpv-ksm6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ksm6
Sep 12 13:37:04.002: INFO: Deleting pod "pod-subpath-test-dynamicpv-ksm6" in namespace "provisioning-1385"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:21.934: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":47,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:23.190: INFO: Only supported for providers [azure] (not aws)
... skipping 23 lines ...
Sep 12 13:37:15.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 12 13:37:16.362: INFO: Waiting up to 5m0s for pod "pod-1f194423-cb42-4a78-ae93-cdf117cc5245" in namespace "emptydir-9453" to be "Succeeded or Failed"
Sep 12 13:37:16.471: INFO: Pod "pod-1f194423-cb42-4a78-ae93-cdf117cc5245": Phase="Pending", Reason="", readiness=false. Elapsed: 108.497499ms
Sep 12 13:37:18.580: INFO: Pod "pod-1f194423-cb42-4a78-ae93-cdf117cc5245": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217685167s
Sep 12 13:37:20.690: INFO: Pod "pod-1f194423-cb42-4a78-ae93-cdf117cc5245": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32739126s
Sep 12 13:37:22.800: INFO: Pod "pod-1f194423-cb42-4a78-ae93-cdf117cc5245": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437263537s
STEP: Saw pod success
Sep 12 13:37:22.800: INFO: Pod "pod-1f194423-cb42-4a78-ae93-cdf117cc5245" satisfied condition "Succeeded or Failed"
Sep 12 13:37:22.908: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-1f194423-cb42-4a78-ae93-cdf117cc5245 container test-container: <nil>
STEP: delete the pod
Sep 12 13:37:23.135: INFO: Waiting for pod pod-1f194423-cb42-4a78-ae93-cdf117cc5245 to disappear
Sep 12 13:37:23.243: INFO: Pod pod-1f194423-cb42-4a78-ae93-cdf117cc5245 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.765 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":123,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:23.486: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 59 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Sep 12 13:37:16.385: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 12 13:37:16.385: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jqwh
STEP: Creating a pod to test subpath
Sep 12 13:37:16.495: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jqwh" in namespace "provisioning-2764" to be "Succeeded or Failed"
Sep 12 13:37:16.602: INFO: Pod "pod-subpath-test-inlinevolume-jqwh": Phase="Pending", Reason="", readiness=false. Elapsed: 106.8777ms
Sep 12 13:37:18.710: INFO: Pod "pod-subpath-test-inlinevolume-jqwh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21499303s
Sep 12 13:37:20.830: INFO: Pod "pod-subpath-test-inlinevolume-jqwh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335445477s
Sep 12 13:37:22.939: INFO: Pod "pod-subpath-test-inlinevolume-jqwh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444438288s
Sep 12 13:37:25.103: INFO: Pod "pod-subpath-test-inlinevolume-jqwh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.608335191s
STEP: Saw pod success
Sep 12 13:37:25.103: INFO: Pod "pod-subpath-test-inlinevolume-jqwh" satisfied condition "Succeeded or Failed"
Sep 12 13:37:25.211: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-jqwh container test-container-subpath-inlinevolume-jqwh: <nil>
STEP: delete the pod
Sep 12 13:37:25.461: INFO: Waiting for pod pod-subpath-test-inlinevolume-jqwh to disappear
Sep 12 13:37:25.590: INFO: Pod pod-subpath-test-inlinevolume-jqwh no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jqwh
Sep 12 13:37:25.590: INFO: Deleting pod "pod-subpath-test-inlinevolume-jqwh" in namespace "provisioning-2764"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":14,"skipped":108,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:26.183: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:50.417: INFO: >>> kubeConfig: /root/.kube/config
... skipping 83 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":59,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:37:26.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-3892/secret-test-81a28dce-fddd-4134-898a-2245937d3ac2
STEP: Creating a pod to test consume secrets
Sep 12 13:37:26.962: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435" in namespace "secrets-3892" to be "Succeeded or Failed"
Sep 12 13:37:27.069: INFO: Pod "pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435": Phase="Pending", Reason="", readiness=false. Elapsed: 107.269247ms
Sep 12 13:37:29.178: INFO: Pod "pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215896486s
Sep 12 13:37:31.288: INFO: Pod "pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326384445s
Sep 12 13:37:33.396: INFO: Pod "pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.434030599s
STEP: Saw pod success
Sep 12 13:37:33.396: INFO: Pod "pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435" satisfied condition "Succeeded or Failed"
Sep 12 13:37:33.506: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435 container env-test: <nil>
STEP: delete the pod
Sep 12 13:37:33.836: INFO: Waiting for pod pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435 to disappear
Sep 12 13:37:33.943: INFO: Pod pod-configmaps-1b7d0bc2-918e-4324-abe7-1a473116a435 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.960 seconds]
[sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":110,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 63 lines ...
Sep 12 13:36:34.216: INFO: PersistentVolumeClaim csi-hostpathjbll7 found but phase is Pending instead of Bound.
Sep 12 13:36:36.331: INFO: PersistentVolumeClaim csi-hostpathjbll7 found but phase is Pending instead of Bound.
Sep 12 13:36:38.442: INFO: PersistentVolumeClaim csi-hostpathjbll7 found but phase is Pending instead of Bound.
Sep 12 13:36:40.552: INFO: PersistentVolumeClaim csi-hostpathjbll7 found and phase=Bound (12.838648736s)
STEP: Creating pod pod-subpath-test-dynamicpv-q2dw
STEP: Creating a pod to test subpath
Sep 12 13:36:40.888: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-q2dw" in namespace "provisioning-5448" to be "Succeeded or Failed"
Sep 12 13:36:40.998: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 109.95471ms
Sep 12 13:36:43.112: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223850869s
Sep 12 13:36:45.224: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335998079s
Sep 12 13:36:47.334: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446003325s
Sep 12 13:36:49.445: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.556500046s
Sep 12 13:36:51.556: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.668041255s
... skipping 4 lines ...
Sep 12 13:37:02.110: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 21.221864035s
Sep 12 13:37:04.227: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 23.338342525s
Sep 12 13:37:06.337: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 25.449036039s
Sep 12 13:37:08.449: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Pending", Reason="", readiness=false. Elapsed: 27.560216566s
Sep 12 13:37:10.559: INFO: Pod "pod-subpath-test-dynamicpv-q2dw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.670958344s
STEP: Saw pod success
Sep 12 13:37:10.559: INFO: Pod "pod-subpath-test-dynamicpv-q2dw" satisfied condition "Succeeded or Failed"
Sep 12 13:37:10.669: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-q2dw container test-container-subpath-dynamicpv-q2dw: <nil>
STEP: delete the pod
Sep 12 13:37:10.907: INFO: Waiting for pod pod-subpath-test-dynamicpv-q2dw to disappear
Sep 12 13:37:11.034: INFO: Pod pod-subpath-test-dynamicpv-q2dw no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-q2dw
Sep 12 13:37:11.034: INFO: Deleting pod "pod-subpath-test-dynamicpv-q2dw" in namespace "provisioning-5448"
... skipping 63 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:34.192: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 51 lines ...
Sep 12 13:37:24.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 12 13:37:24.962: INFO: Waiting up to 5m0s for pod "pod-74d88e63-20f3-4206-b310-10c8aa9f2978" in namespace "emptydir-1404" to be "Succeeded or Failed"
Sep 12 13:37:25.101: INFO: Pod "pod-74d88e63-20f3-4206-b310-10c8aa9f2978": Phase="Pending", Reason="", readiness=false. Elapsed: 138.159451ms
Sep 12 13:37:27.210: INFO: Pod "pod-74d88e63-20f3-4206-b310-10c8aa9f2978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247918668s
Sep 12 13:37:29.323: INFO: Pod "pod-74d88e63-20f3-4206-b310-10c8aa9f2978": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360291437s
Sep 12 13:37:31.432: INFO: Pod "pod-74d88e63-20f3-4206-b310-10c8aa9f2978": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469399139s
Sep 12 13:37:33.547: INFO: Pod "pod-74d88e63-20f3-4206-b310-10c8aa9f2978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.584423228s
STEP: Saw pod success
Sep 12 13:37:33.547: INFO: Pod "pod-74d88e63-20f3-4206-b310-10c8aa9f2978" satisfied condition "Succeeded or Failed"
Sep 12 13:37:33.681: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-74d88e63-20f3-4206-b310-10c8aa9f2978 container test-container: <nil>
STEP: delete the pod
Sep 12 13:37:33.914: INFO: Waiting for pod pod-74d88e63-20f3-4206-b310-10c8aa9f2978 to disappear
Sep 12 13:37:34.022: INFO: Pod pod-74d88e63-20f3-4206-b310-10c8aa9f2978 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":130,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:34.248: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:37:35.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-585" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":10,"skipped":144,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:36.089: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 146 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1256
    CSIStorageCapacity disabled
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1299
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:37:09.829: INFO: >>> kubeConfig: /root/.kube/config
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":4,"skipped":35,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:32.524 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":5,"skipped":33,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:55.370 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:239
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":7,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:37:42.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4406" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":8,"skipped":50,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:34.337 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":9,"skipped":99,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
Sep 12 13:37:23.494: INFO: PersistentVolumeClaim pvc-bl46g found and phase=Bound (110.636049ms)
Sep 12 13:37:23.494: INFO: Waiting up to 3m0s for PersistentVolume nfs-mhd22 to have phase Bound
Sep 12 13:37:23.609: INFO: PersistentVolume nfs-mhd22 found and phase=Bound (114.916413ms)
STEP: Checking pod has write access to PersistentVolume
Sep 12 13:37:23.833: INFO: Creating nfs test pod
Sep 12 13:37:23.945: INFO: Pod should terminate with exitcode 0 (success)
Sep 12 13:37:23.945: INFO: Waiting up to 5m0s for pod "pvc-tester-tk4l6" in namespace "pv-2111" to be "Succeeded or Failed"
Sep 12 13:37:24.055: INFO: Pod "pvc-tester-tk4l6": Phase="Pending", Reason="", readiness=false. Elapsed: 109.768555ms
Sep 12 13:37:26.170: INFO: Pod "pvc-tester-tk4l6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225423352s
Sep 12 13:37:28.281: INFO: Pod "pvc-tester-tk4l6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336033082s
Sep 12 13:37:30.391: INFO: Pod "pvc-tester-tk4l6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446456575s
Sep 12 13:37:32.502: INFO: Pod "pvc-tester-tk4l6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.55732063s
STEP: Saw pod success
Sep 12 13:37:32.502: INFO: Pod "pvc-tester-tk4l6" satisfied condition "Succeeded or Failed"
Sep 12 13:37:32.502: INFO: Pod pvc-tester-tk4l6 succeeded 
Sep 12 13:37:32.502: INFO: Deleting pod "pvc-tester-tk4l6" in namespace "pv-2111"
Sep 12 13:37:32.618: INFO: Wait up to 5m0s for pod "pvc-tester-tk4l6" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Sep 12 13:37:32.728: INFO: Deleting PVC pvc-bl46g to trigger reclamation of PV 
Sep 12 13:37:32.728: INFO: Deleting PersistentVolumeClaim "pvc-bl46g"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":6,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:14.735 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":6,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:48.965: INFO: Driver "local" does not provide raw block - skipping
... skipping 104 lines ...
• [SLOW TEST:12.547 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:51.848: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1558
    should modify fsGroup if fsGroupPolicy=default
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1582
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":7,"skipped":77,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:37:54.883: INFO: Only supported for providers [openstack] (not aws)
... skipping 137 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Sep 12 13:37:39.382: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9" in namespace "security-context-test-536" to be "Succeeded or Failed"
Sep 12 13:37:39.491: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.229739ms
Sep 12 13:37:41.606: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224052108s
Sep 12 13:37:43.718: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336222172s
Sep 12 13:37:45.834: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451697937s
Sep 12 13:37:47.945: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562569388s
Sep 12 13:37:50.056: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.673548884s
Sep 12 13:37:52.166: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.78393429s
Sep 12 13:37:54.277: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.895331788s
Sep 12 13:37:56.388: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.00644406s
Sep 12 13:37:58.499: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.116989094s
Sep 12 13:37:58.499: INFO: Pod "alpine-nnp-true-edc38b35-97c1-436c-85f1-a09938ef3dc9" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:37:58.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-536" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":40,"failed":0}

SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Sep 12 13:37:26.251: INFO: PersistentVolumeClaim pvc-x966t found but phase is Pending instead of Bound.
Sep 12 13:37:28.360: INFO: PersistentVolumeClaim pvc-x966t found and phase=Bound (2.217542219s)
Sep 12 13:37:28.360: INFO: Waiting up to 3m0s for PersistentVolume local-bhlhs to have phase Bound
Sep 12 13:37:28.472: INFO: PersistentVolume local-bhlhs found and phase=Bound (111.722479ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dk8p
STEP: Creating a pod to test subpath
Sep 12 13:37:28.809: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dk8p" in namespace "provisioning-607" to be "Succeeded or Failed"
Sep 12 13:37:28.918: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 108.879465ms
Sep 12 13:37:31.028: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219338329s
Sep 12 13:37:33.138: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329381915s
Sep 12 13:37:35.259: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450034967s
Sep 12 13:37:37.398: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589585885s
Sep 12 13:37:39.508: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.699138812s
Sep 12 13:37:41.617: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 12.808420041s
Sep 12 13:37:43.729: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.920495748s
STEP: Saw pod success
Sep 12 13:37:43.729: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p" satisfied condition "Succeeded or Failed"
Sep 12 13:37:43.838: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-dk8p container test-container-subpath-preprovisionedpv-dk8p: <nil>
STEP: delete the pod
Sep 12 13:37:44.066: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dk8p to disappear
Sep 12 13:37:44.175: INFO: Pod pod-subpath-test-preprovisionedpv-dk8p no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dk8p
Sep 12 13:37:44.175: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dk8p" in namespace "provisioning-607"
STEP: Creating pod pod-subpath-test-preprovisionedpv-dk8p
STEP: Creating a pod to test subpath
Sep 12 13:37:44.397: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dk8p" in namespace "provisioning-607" to be "Succeeded or Failed"
Sep 12 13:37:44.506: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 108.863817ms
Sep 12 13:37:46.616: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218456152s
Sep 12 13:37:48.729: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331118591s
Sep 12 13:37:50.839: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441955866s
Sep 12 13:37:52.949: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552059909s
Sep 12 13:37:55.060: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662138581s
Sep 12 13:37:57.169: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.77158019s
STEP: Saw pod success
Sep 12 13:37:57.169: INFO: Pod "pod-subpath-test-preprovisionedpv-dk8p" satisfied condition "Succeeded or Failed"
Sep 12 13:37:57.307: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-dk8p container test-container-subpath-preprovisionedpv-dk8p: <nil>
STEP: delete the pod
Sep 12 13:37:57.535: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dk8p to disappear
Sep 12 13:37:57.647: INFO: Pod pod-subpath-test-preprovisionedpv-dk8p no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dk8p
Sep 12 13:37:57.647: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dk8p" in namespace "provisioning-607"
... skipping 44 lines ...
• [SLOW TEST:9.030 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":7,"skipped":50,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:36:48.019: INFO: >>> kubeConfig: /root/.kube/config
... skipping 112 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":3,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:01.187: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 109 lines ...
• [SLOW TEST:59.290 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support orphan deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:1050
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support orphan deletion of custom resources","total":-1,"completed":8,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:02.110: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 198 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":16,"skipped":140,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":97,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:37:17.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 172 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":11,"skipped":154,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:13.269: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 102 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:37:30.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 97 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:13.846: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 204 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":11,"skipped":100,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 16 lines ...
Sep 12 13:37:56.407: INFO: PersistentVolumeClaim pvc-mw2nj found but phase is Pending instead of Bound.
Sep 12 13:37:58.517: INFO: PersistentVolumeClaim pvc-mw2nj found and phase=Bound (2.219770072s)
Sep 12 13:37:58.517: INFO: Waiting up to 3m0s for PersistentVolume local-zvm6p to have phase Bound
Sep 12 13:37:58.626: INFO: PersistentVolume local-zvm6p found and phase=Bound (108.664773ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-4msl
STEP: Creating a pod to test exec-volume-test
Sep 12 13:37:58.960: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-4msl" in namespace "volume-6381" to be "Succeeded or Failed"
Sep 12 13:37:59.069: INFO: Pod "exec-volume-test-preprovisionedpv-4msl": Phase="Pending", Reason="", readiness=false. Elapsed: 108.612301ms
Sep 12 13:38:01.180: INFO: Pod "exec-volume-test-preprovisionedpv-4msl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21969256s
Sep 12 13:38:03.290: INFO: Pod "exec-volume-test-preprovisionedpv-4msl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329811015s
Sep 12 13:38:05.400: INFO: Pod "exec-volume-test-preprovisionedpv-4msl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43975844s
Sep 12 13:38:07.511: INFO: Pod "exec-volume-test-preprovisionedpv-4msl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551364258s
Sep 12 13:38:09.640: INFO: Pod "exec-volume-test-preprovisionedpv-4msl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.680188888s
Sep 12 13:38:11.749: INFO: Pod "exec-volume-test-preprovisionedpv-4msl": Phase="Pending", Reason="", readiness=false. Elapsed: 12.788886169s
Sep 12 13:38:13.858: INFO: Pod "exec-volume-test-preprovisionedpv-4msl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.898047494s
STEP: Saw pod success
Sep 12 13:38:13.858: INFO: Pod "exec-volume-test-preprovisionedpv-4msl" satisfied condition "Succeeded or Failed"
Sep 12 13:38:13.967: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-4msl container exec-container-preprovisionedpv-4msl: <nil>
STEP: delete the pod
Sep 12 13:38:14.202: INFO: Waiting for pod exec-volume-test-preprovisionedpv-4msl to disappear
Sep 12 13:38:14.314: INFO: Pod exec-volume-test-preprovisionedpv-4msl no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-4msl
Sep 12 13:38:14.314: INFO: Deleting pod "exec-volume-test-preprovisionedpv-4msl" in namespace "volume-6381"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled","total":-1,"completed":6,"skipped":42,"failed":0}
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:37:36.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 113 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":7,"skipped":42,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:19.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6033" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":49,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:19.859: INFO: Only supported for providers [gce gke] (not aws)
... skipping 119 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:20.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2577" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":10,"skipped":107,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:20.960: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":58,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:38:16.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-0ab0f083-6c65-4a20-945b-f0e036bb53eb
STEP: Creating a pod to test consume configMaps
Sep 12 13:38:17.304: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a2bf090-04fc-4596-9116-2f39a57670ee" in namespace "projected-8359" to be "Succeeded or Failed"
Sep 12 13:38:17.413: INFO: Pod "pod-projected-configmaps-1a2bf090-04fc-4596-9116-2f39a57670ee": Phase="Pending", Reason="", readiness=false. Elapsed: 108.595882ms
Sep 12 13:38:19.522: INFO: Pod "pod-projected-configmaps-1a2bf090-04fc-4596-9116-2f39a57670ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217691123s
Sep 12 13:38:21.632: INFO: Pod "pod-projected-configmaps-1a2bf090-04fc-4596-9116-2f39a57670ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327773054s
STEP: Saw pod success
Sep 12 13:38:21.632: INFO: Pod "pod-projected-configmaps-1a2bf090-04fc-4596-9116-2f39a57670ee" satisfied condition "Succeeded or Failed"
Sep 12 13:38:21.741: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-projected-configmaps-1a2bf090-04fc-4596-9116-2f39a57670ee container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:38:21.975: INFO: Waiting for pod pod-projected-configmaps-1a2bf090-04fc-4596-9116-2f39a57670ee to disappear
Sep 12 13:38:22.085: INFO: Pod pod-projected-configmaps-1a2bf090-04fc-4596-9116-2f39a57670ee no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.789 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:22.332: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 48 lines ...
Sep 12 13:37:43.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
Sep 12 13:37:44.411: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 12 13:37:44.649: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9661" in namespace "provisioning-9661" to be "Succeeded or Failed"
Sep 12 13:37:44.760: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 110.367961ms
Sep 12 13:37:46.871: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221626428s
Sep 12 13:37:48.984: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334954426s
Sep 12 13:37:51.094: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445042924s
Sep 12 13:37:53.204: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555046005s
Sep 12 13:37:55.316: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 10.666790657s
Sep 12 13:37:57.427: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 12.778098428s
Sep 12 13:37:59.566: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.917210305s
STEP: Saw pod success
Sep 12 13:37:59.566: INFO: Pod "hostpath-symlink-prep-provisioning-9661" satisfied condition "Succeeded or Failed"
Sep 12 13:37:59.567: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9661" in namespace "provisioning-9661"
Sep 12 13:37:59.715: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9661" to be fully deleted
Sep 12 13:37:59.838: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-ngz5
STEP: Creating a pod to test subpath
Sep 12 13:37:59.959: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-ngz5" in namespace "provisioning-9661" to be "Succeeded or Failed"
Sep 12 13:38:00.090: INFO: Pod "pod-subpath-test-inlinevolume-ngz5": Phase="Pending", Reason="", readiness=false. Elapsed: 130.310429ms
Sep 12 13:38:02.253: INFO: Pod "pod-subpath-test-inlinevolume-ngz5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293655374s
Sep 12 13:38:04.363: INFO: Pod "pod-subpath-test-inlinevolume-ngz5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.403536053s
Sep 12 13:38:06.473: INFO: Pod "pod-subpath-test-inlinevolume-ngz5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.513444906s
Sep 12 13:38:08.582: INFO: Pod "pod-subpath-test-inlinevolume-ngz5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622929887s
Sep 12 13:38:10.692: INFO: Pod "pod-subpath-test-inlinevolume-ngz5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.73264228s
Sep 12 13:38:12.810: INFO: Pod "pod-subpath-test-inlinevolume-ngz5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.850411625s
Sep 12 13:38:14.919: INFO: Pod "pod-subpath-test-inlinevolume-ngz5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.960022792s
STEP: Saw pod success
Sep 12 13:38:14.920: INFO: Pod "pod-subpath-test-inlinevolume-ngz5" satisfied condition "Succeeded or Failed"
Sep 12 13:38:15.029: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-ngz5 container test-container-subpath-inlinevolume-ngz5: <nil>
STEP: delete the pod
Sep 12 13:38:15.255: INFO: Waiting for pod pod-subpath-test-inlinevolume-ngz5 to disappear
Sep 12 13:38:15.367: INFO: Pod pod-subpath-test-inlinevolume-ngz5 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-ngz5
Sep 12 13:38:15.367: INFO: Deleting pod "pod-subpath-test-inlinevolume-ngz5" in namespace "provisioning-9661"
STEP: Deleting pod
Sep 12 13:38:15.494: INFO: Deleting pod "pod-subpath-test-inlinevolume-ngz5" in namespace "provisioning-9661"
Sep 12 13:38:15.738: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9661" in namespace "provisioning-9661" to be "Succeeded or Failed"
Sep 12 13:38:15.890: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 151.356234ms
Sep 12 13:38:18.001: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262159237s
Sep 12 13:38:20.110: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37120537s
Sep 12 13:38:22.234: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Pending", Reason="", readiness=false. Elapsed: 6.49517459s
Sep 12 13:38:24.343: INFO: Pod "hostpath-symlink-prep-provisioning-9661": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.604148605s
STEP: Saw pod success
Sep 12 13:38:24.343: INFO: Pod "hostpath-symlink-prep-provisioning-9661" satisfied condition "Succeeded or Failed"
Sep 12 13:38:24.343: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9661" in namespace "provisioning-9661"
Sep 12 13:38:24.458: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9661" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:24.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9661" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:24.812: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 207 lines ...
• [SLOW TEST:11.768 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:286
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":12,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:27.779: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
• [SLOW TEST:33.233 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":8,"skipped":107,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:28.236: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 81 lines ...
Sep 12 13:37:24.608: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Sep 12 13:37:24.717: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathxs4h9] to have phase Bound
Sep 12 13:37:24.824: INFO: PersistentVolumeClaim csi-hostpathxs4h9 found but phase is Pending instead of Bound.
Sep 12 13:37:26.932: INFO: PersistentVolumeClaim csi-hostpathxs4h9 found and phase=Bound (2.215563273s)
STEP: Creating pod pod-subpath-test-dynamicpv-dlqb
STEP: Creating a pod to test subpath
Sep 12 13:37:27.258: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dlqb" in namespace "provisioning-8554" to be "Succeeded or Failed"
Sep 12 13:37:27.366: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 107.617ms
Sep 12 13:37:29.474: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216040848s
Sep 12 13:37:31.583: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325067642s
Sep 12 13:37:33.705: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446706308s
Sep 12 13:37:35.823: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.565294204s
Sep 12 13:37:37.932: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.673825543s
Sep 12 13:37:40.039: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.781607053s
Sep 12 13:37:42.148: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.890581295s
Sep 12 13:37:44.259: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.001584217s
Sep 12 13:37:46.383: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.124641105s
Sep 12 13:37:48.492: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.234562081s
STEP: Saw pod success
Sep 12 13:37:48.493: INFO: Pod "pod-subpath-test-dynamicpv-dlqb" satisfied condition "Succeeded or Failed"
Sep 12 13:37:48.604: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-dlqb container test-container-subpath-dynamicpv-dlqb: <nil>
STEP: delete the pod
Sep 12 13:37:48.867: INFO: Waiting for pod pod-subpath-test-dynamicpv-dlqb to disappear
Sep 12 13:37:48.975: INFO: Pod pod-subpath-test-dynamicpv-dlqb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dlqb
Sep 12 13:37:48.975: INFO: Deleting pod "pod-subpath-test-dynamicpv-dlqb" in namespace "provisioning-8554"
STEP: Creating pod pod-subpath-test-dynamicpv-dlqb
STEP: Creating a pod to test subpath
Sep 12 13:37:49.197: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dlqb" in namespace "provisioning-8554" to be "Succeeded or Failed"
Sep 12 13:37:49.305: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 107.653241ms
Sep 12 13:37:51.412: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21553216s
Sep 12 13:37:53.521: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324052532s
Sep 12 13:37:55.630: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433426035s
Sep 12 13:37:57.774: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.577263916s
Sep 12 13:37:59.911: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.713845549s
Sep 12 13:38:02.050: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.853401385s
Sep 12 13:38:04.158: INFO: Pod "pod-subpath-test-dynamicpv-dlqb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.961568687s
STEP: Saw pod success
Sep 12 13:38:04.159: INFO: Pod "pod-subpath-test-dynamicpv-dlqb" satisfied condition "Succeeded or Failed"
Sep 12 13:38:04.272: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-dlqb container test-container-subpath-dynamicpv-dlqb: <nil>
STEP: delete the pod
Sep 12 13:38:04.558: INFO: Waiting for pod pod-subpath-test-dynamicpv-dlqb to disappear
Sep 12 13:38:04.703: INFO: Pod pod-subpath-test-dynamicpv-dlqb no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dlqb
Sep 12 13:38:04.703: INFO: Deleting pod "pod-subpath-test-dynamicpv-dlqb" in namespace "provisioning-8554"
... skipping 62 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:395
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":45,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:28.303: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 250 lines ...
STEP: Deleting pod hostexec-ip-172-20-48-249.eu-central-1.compute.internal-t7vrv in namespace volumemode-8123
Sep 12 13:38:19.619: INFO: Deleting pod "pod-40523249-fcf6-4af2-b607-cbcc3a3267b8" in namespace "volumemode-8123"
Sep 12 13:38:19.730: INFO: Wait up to 5m0s for pod "pod-40523249-fcf6-4af2-b607-cbcc3a3267b8" to be fully deleted
STEP: Deleting pv and pvc
Sep 12 13:38:23.949: INFO: Deleting PersistentVolumeClaim "pvc-kflxt"
Sep 12 13:38:24.063: INFO: Deleting PersistentVolume "aws-bkgqg"
Sep 12 13:38:24.376: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0bd243528bd90e0d5", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0bd243528bd90e0d5 is currently attached to i-0b4adaf6280bf4240
	status code: 400, request id: 1519aeb0-da3a-464e-a0f4-4d359d5b9253
Sep 12 13:38:29.964: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0bd243528bd90e0d5".
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:29.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-8123" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":17,"skipped":146,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:30.217: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":11,"skipped":67,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:32.712: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 29 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-b25e88cd-68fa-46b3-9d9a-d46969a18f77
STEP: Creating a pod to test consume secrets
Sep 12 13:38:25.655: INFO: Waiting up to 5m0s for pod "pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b" in namespace "secrets-8700" to be "Succeeded or Failed"
Sep 12 13:38:25.764: INFO: Pod "pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b": Phase="Pending", Reason="", readiness=false. Elapsed: 109.088579ms
Sep 12 13:38:27.874: INFO: Pod "pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219506727s
Sep 12 13:38:29.985: INFO: Pod "pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329924386s
Sep 12 13:38:32.098: INFO: Pod "pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.443133264s
STEP: Saw pod success
Sep 12 13:38:32.098: INFO: Pod "pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b" satisfied condition "Succeeded or Failed"
Sep 12 13:38:32.207: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b container secret-volume-test: <nil>
STEP: delete the pod
Sep 12 13:38:32.435: INFO: Waiting for pod pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b to disappear
Sep 12 13:38:32.543: INFO: Pod pod-secrets-26e49517-71fa-4125-9395-bd3aad50328b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.886 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:32.787: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
Sep 12 13:38:01.836: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-7971s44tr
STEP: creating a claim
Sep 12 13:38:01.976: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Sep 12 13:38:02.219: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Sep 12 13:38:02.506: INFO: Error updating pvc awshbhxs: PersistentVolumeClaim "awshbhxs" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7971s44tr",
  	... // 3 identical fields
  }

Sep 12 13:38:04.781: INFO: Error updating pvc awshbhxs: PersistentVolumeClaim "awshbhxs" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-7971s44tr",
  	... // 3 identical fields
  }

... skipping 196 lines (13 further retries of the same "spec is immutable" update error, every ~2s from 13:38:06 to 13:38:32, each with an identical PersistentVolumeClaimSpec diff) ...
Sep 12 13:38:32.952: INFO: Error updating pvc awshbhxs: PersistentVolumeClaim "awshbhxs" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":4,"skipped":20,"failed":0}

S
------------------------------
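The retries above all hit the same apiserver rule: for a bound claim, every field of the PVC spec except `resources.requests` is immutable. A minimal, self-contained sketch of that check (simplified types and names, not the actual apiserver validation code):

```go
package main

import (
	"fmt"
	"reflect"
)

// claimSpec loosely mirrors core.PersistentVolumeClaimSpec; hypothetical
// simplification for illustration only.
type claimSpec struct {
	AccessModes      []string
	StorageClassName string
	VolumeName       string
	Requests         map[string]string // resources.requests
}

// validateUpdate sketches the rule from the log: reject any spec change
// other than resources.requests, and only allow requests to change on
// bound claims.
func validateUpdate(oldSpec, newSpec claimSpec, bound bool) error {
	frozenOld, frozenNew := oldSpec, newSpec
	frozenOld.Requests, frozenNew.Requests = nil, nil
	if !reflect.DeepEqual(frozenOld, frozenNew) {
		return fmt.Errorf("spec is immutable after creation except resources.requests for bound claims")
	}
	if !bound && !reflect.DeepEqual(oldSpec.Requests, newSpec.Requests) {
		return fmt.Errorf("resources.requests may only change for bound claims")
	}
	return nil
}

func main() {
	oldSpec := claimSpec{
		AccessModes:      []string{"ReadWriteOnce"},
		StorageClassName: "volume-expand-7971s44tr",
		Requests:         map[string]string{"storage": "1Gi"},
	}
	grown := oldSpec
	grown.Requests = map[string]string{"storage": "2Gi"}
	fmt.Println(validateUpdate(oldSpec, grown, true)) // allowed: only requests changed on a bound claim

	changedClass := oldSpec
	changedClass.StorageClassName = "other"
	fmt.Println(validateUpdate(oldSpec, changedClass, true)) // rejected: spec is immutable
}
```

In the real test, the expansion itself is then rejected because the StorageClass lacks `allowVolumeExpansion: true`, which is exactly what "should not allow expansion of pvcs without AllowVolumeExpansion property" verifies.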
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/flowcontrol.go:98

  skipping test until flakiness is resolved

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/flowcontrol.go:100
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:37:59.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
Sep 12 13:37:59.886: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Sep 12 13:38:00.208: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8979" in namespace "provisioning-8979" to be "Succeeded or Failed"
Sep 12 13:38:00.392: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 183.649035ms
Sep 12 13:38:02.518: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309679692s
Sep 12 13:38:04.646: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43726069s
Sep 12 13:38:06.755: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.546972189s
Sep 12 13:38:08.867: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 8.658232439s
Sep 12 13:38:10.982: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.774023649s
STEP: Saw pod success
Sep 12 13:38:10.982: INFO: Pod "hostpath-symlink-prep-provisioning-8979" satisfied condition "Succeeded or Failed"
Sep 12 13:38:10.982: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8979" in namespace "provisioning-8979"
Sep 12 13:38:11.098: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8979" to be fully deleted
Sep 12 13:38:11.206: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-f8rd
Sep 12 13:38:17.534: INFO: Running '/tmp/kubectl3391257765/kubectl --server=https://api.e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-8979 exec pod-subpath-test-inlinevolume-f8rd --container test-container-volume-inlinevolume-f8rd -- /bin/sh -c rm -r /test-volume/provisioning-8979'
Sep 12 13:38:18.766: INFO: stderr: ""
Sep 12 13:38:18.766: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-f8rd
Sep 12 13:38:18.766: INFO: Deleting pod "pod-subpath-test-inlinevolume-f8rd" in namespace "provisioning-8979"
Sep 12 13:38:18.880: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-f8rd" to be fully deleted
STEP: Deleting pod
Sep 12 13:38:25.103: INFO: Deleting pod "pod-subpath-test-inlinevolume-f8rd" in namespace "provisioning-8979"
Sep 12 13:38:25.322: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8979" in namespace "provisioning-8979" to be "Succeeded or Failed"
Sep 12 13:38:25.430: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 108.197619ms
Sep 12 13:38:27.544: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22232979s
Sep 12 13:38:29.658: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336042889s
Sep 12 13:38:31.767: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445106744s
Sep 12 13:38:33.875: INFO: Pod "hostpath-symlink-prep-provisioning-8979": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553879804s
STEP: Saw pod success
Sep 12 13:38:33.876: INFO: Pod "hostpath-symlink-prep-provisioning-8979" satisfied condition "Succeeded or Failed"
Sep 12 13:38:33.876: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8979" in namespace "provisioning-8979"
Sep 12 13:38:33.993: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8979" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:34.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8979" for this suite.
... skipping 8 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":10,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:34.335: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 88 lines ...
Sep 12 13:38:32.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep 12 13:38:33.492: INFO: Waiting up to 5m0s for pod "var-expansion-e850c38b-7cef-4237-8997-26107ea88424" in namespace "var-expansion-435" to be "Succeeded or Failed"
Sep 12 13:38:33.603: INFO: Pod "var-expansion-e850c38b-7cef-4237-8997-26107ea88424": Phase="Pending", Reason="", readiness=false. Elapsed: 111.169293ms
Sep 12 13:38:35.719: INFO: Pod "var-expansion-e850c38b-7cef-4237-8997-26107ea88424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.227428433s
STEP: Saw pod success
Sep 12 13:38:35.719: INFO: Pod "var-expansion-e850c38b-7cef-4237-8997-26107ea88424" satisfied condition "Succeeded or Failed"
Sep 12 13:38:35.833: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod var-expansion-e850c38b-7cef-4237-8997-26107ea88424 container dapi-container: <nil>
STEP: delete the pod
Sep 12 13:38:36.061: INFO: Waiting for pod var-expansion-e850c38b-7cef-4237-8997-26107ea88424 to disappear
Sep 12 13:38:36.171: INFO: Pod var-expansion-e850c38b-7cef-4237-8997-26107ea88424 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:36.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-435" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:36.400: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 99 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":77,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:37.641: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 127 lines ...
• [SLOW TEST:24.300 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":6,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Sep 12 13:38:26.137: INFO: PersistentVolumeClaim pvc-hd7dk found but phase is Pending instead of Bound.
Sep 12 13:38:28.247: INFO: PersistentVolumeClaim pvc-hd7dk found and phase=Bound (4.329058268s)
Sep 12 13:38:28.247: INFO: Waiting up to 3m0s for PersistentVolume local-grqr2 to have phase Bound
Sep 12 13:38:28.359: INFO: PersistentVolume local-grqr2 found and phase=Bound (112.647436ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4xrv
STEP: Creating a pod to test subpath
Sep 12 13:38:28.688: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4xrv" in namespace "provisioning-1653" to be "Succeeded or Failed"
Sep 12 13:38:28.797: INFO: Pod "pod-subpath-test-preprovisionedpv-4xrv": Phase="Pending", Reason="", readiness=false. Elapsed: 109.358062ms
Sep 12 13:38:30.924: INFO: Pod "pod-subpath-test-preprovisionedpv-4xrv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236459767s
Sep 12 13:38:33.041: INFO: Pod "pod-subpath-test-preprovisionedpv-4xrv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353337642s
Sep 12 13:38:35.175: INFO: Pod "pod-subpath-test-preprovisionedpv-4xrv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.487151553s
STEP: Saw pod success
Sep 12 13:38:35.175: INFO: Pod "pod-subpath-test-preprovisionedpv-4xrv" satisfied condition "Succeeded or Failed"
Sep 12 13:38:35.290: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-4xrv container test-container-volume-preprovisionedpv-4xrv: <nil>
STEP: delete the pod
Sep 12 13:38:35.521: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4xrv to disappear
Sep 12 13:38:35.630: INFO: Pod pod-subpath-test-preprovisionedpv-4xrv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4xrv
Sep 12 13:38:35.630: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4xrv" in namespace "provisioning-1653"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:37.955: INFO: Only supported for providers [azure] (not aws)
... skipping 103 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1438
------------------------------
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:39.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2890" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":7,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:39.391: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 134 lines ...
Sep 12 13:38:27.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 12 13:38:28.454: INFO: Waiting up to 5m0s for pod "pod-db812848-5858-44ca-87a6-a681d9513fdc" in namespace "emptydir-501" to be "Succeeded or Failed"
Sep 12 13:38:28.561: INFO: Pod "pod-db812848-5858-44ca-87a6-a681d9513fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 107.511921ms
Sep 12 13:38:30.670: INFO: Pod "pod-db812848-5858-44ca-87a6-a681d9513fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216182252s
Sep 12 13:38:32.779: INFO: Pod "pod-db812848-5858-44ca-87a6-a681d9513fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324594597s
Sep 12 13:38:34.905: INFO: Pod "pod-db812848-5858-44ca-87a6-a681d9513fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.45078036s
Sep 12 13:38:37.015: INFO: Pod "pod-db812848-5858-44ca-87a6-a681d9513fdc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.560726771s
Sep 12 13:38:39.124: INFO: Pod "pod-db812848-5858-44ca-87a6-a681d9513fdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.669678223s
STEP: Saw pod success
Sep 12 13:38:39.124: INFO: Pod "pod-db812848-5858-44ca-87a6-a681d9513fdc" satisfied condition "Succeeded or Failed"
Sep 12 13:38:39.232: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-db812848-5858-44ca-87a6-a681d9513fdc container test-container: <nil>
STEP: delete the pod
Sep 12 13:38:39.465: INFO: Waiting for pod pod-db812848-5858-44ca-87a6-a681d9513fdc to disappear
Sep 12 13:38:39.573: INFO: Pod pod-db812848-5858-44ca-87a6-a681d9513fdc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.005 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":102,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:39.809: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:39.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9629" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":10,"skipped":85,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:40.017: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 71 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:40.323: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 111 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Sep 12 13:38:32.753: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-5920" to be "Succeeded or Failed"
Sep 12 13:38:32.863: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 110.282374ms
Sep 12 13:38:35.006: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25304447s
Sep 12 13:38:37.118: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365398287s
Sep 12 13:38:39.227: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474584229s
Sep 12 13:38:41.338: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.584968331s
Sep 12 13:38:41.338: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:41.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5920" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":12,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:41.686: INFO: Only supported for providers [gce gke] (not aws)
... skipping 34 lines ...
Sep 12 13:38:42.269: INFO: Creating a PV followed by a PVC
Sep 12 13:38:42.492: INFO: Waiting for PV local-pvbrgm8 to bind to PVC pvc-vb8r6
Sep 12 13:38:42.492: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vb8r6] to have phase Bound
Sep 12 13:38:42.600: INFO: PersistentVolumeClaim pvc-vb8r6 found and phase=Bound (107.605052ms)
Sep 12 13:38:42.600: INFO: Waiting up to 3m0s for PersistentVolume local-pvbrgm8 to have phase Bound
Sep 12 13:38:42.709: INFO: PersistentVolume local-pvbrgm8 found and phase=Bound (109.012342ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
Sep 12 13:38:43.055: INFO: Waiting up to 5m0s for pod "pod-7b9c6fd3-b85d-4022-8cd0-b2960864313a" in namespace "persistent-local-volumes-test-3857" to be "Unschedulable"
Sep 12 13:38:43.185: INFO: Pod "pod-7b9c6fd3-b85d-4022-8cd0-b2960864313a": Phase="Pending", Reason="", readiness=false. Elapsed: 129.689879ms
Sep 12 13:38:43.185: INFO: Pod "pod-7b9c6fd3-b85d-4022-8cd0-b2960864313a" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:16.027 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":9,"skipped":74,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:44.495: INFO: Driver local doesn't support ext3 -- skipping
... skipping 155 lines ...
Sep 12 13:38:40.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 12 13:38:40.686: INFO: Waiting up to 5m0s for pod "downward-api-3ce181f1-80e5-4eba-a056-b8ff674bc843" in namespace "downward-api-1749" to be "Succeeded or Failed"
Sep 12 13:38:40.795: INFO: Pod "downward-api-3ce181f1-80e5-4eba-a056-b8ff674bc843": Phase="Pending", Reason="", readiness=false. Elapsed: 109.243555ms
Sep 12 13:38:42.905: INFO: Pod "downward-api-3ce181f1-80e5-4eba-a056-b8ff674bc843": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219320774s
Sep 12 13:38:45.016: INFO: Pod "downward-api-3ce181f1-80e5-4eba-a056-b8ff674bc843": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329796159s
STEP: Saw pod success
Sep 12 13:38:45.016: INFO: Pod "downward-api-3ce181f1-80e5-4eba-a056-b8ff674bc843" satisfied condition "Succeeded or Failed"
Sep 12 13:38:45.127: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod downward-api-3ce181f1-80e5-4eba-a056-b8ff674bc843 container dapi-container: <nil>
STEP: delete the pod
Sep 12 13:38:45.383: INFO: Waiting for pod downward-api-3ce181f1-80e5-4eba-a056-b8ff674bc843 to disappear
Sep 12 13:38:45.495: INFO: Pod downward-api-3ce181f1-80e5-4eba-a056-b8ff674bc843 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.733 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":88,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:45.772: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 106 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":4,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 165 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":36,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:46.545: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 94 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Sep 12 13:38:42.363: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1082" to be "Succeeded or Failed"
Sep 12 13:38:42.472: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 108.486102ms
Sep 12 13:38:44.587: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224261073s
Sep 12 13:38:46.718: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.354545102s
STEP: Saw pod success
Sep 12 13:38:46.718: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Sep 12 13:38:46.837: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Sep 12 13:38:47.091: INFO: Waiting for pod pod-host-path-test to disappear
Sep 12 13:38:47.201: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.726 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":13,"skipped":77,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:47.457: INFO: Only supported for providers [openstack] (not aws)
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:48.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3402" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":10,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:48.602: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 40 lines ...
• [SLOW TEST:76.366 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":66,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:48.940: INFO: Only supported for providers [vsphere] (not aws)
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:49.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8676" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":11,"skipped":106,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:49.475: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 43 lines ...
Sep 12 13:38:39.661: INFO: PersistentVolumeClaim pvc-j8fzv found but phase is Pending instead of Bound.
Sep 12 13:38:41.771: INFO: PersistentVolumeClaim pvc-j8fzv found and phase=Bound (12.806636979s)
Sep 12 13:38:41.771: INFO: Waiting up to 3m0s for PersistentVolume local-kpwcn to have phase Bound
Sep 12 13:38:41.881: INFO: PersistentVolume local-kpwcn found and phase=Bound (109.369938ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fqfq
STEP: Creating a pod to test subpath
Sep 12 13:38:42.217: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fqfq" in namespace "provisioning-336" to be "Succeeded or Failed"
Sep 12 13:38:42.326: INFO: Pod "pod-subpath-test-preprovisionedpv-fqfq": Phase="Pending", Reason="", readiness=false. Elapsed: 109.428747ms
Sep 12 13:38:44.437: INFO: Pod "pod-subpath-test-preprovisionedpv-fqfq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220413757s
Sep 12 13:38:46.559: INFO: Pod "pod-subpath-test-preprovisionedpv-fqfq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342466291s
Sep 12 13:38:48.675: INFO: Pod "pod-subpath-test-preprovisionedpv-fqfq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.45833656s
STEP: Saw pod success
Sep 12 13:38:48.675: INFO: Pod "pod-subpath-test-preprovisionedpv-fqfq" satisfied condition "Succeeded or Failed"
Sep 12 13:38:48.785: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fqfq container test-container-subpath-preprovisionedpv-fqfq: <nil>
STEP: delete the pod
Sep 12 13:38:49.030: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fqfq to disappear
Sep 12 13:38:49.140: INFO: Pod pod-subpath-test-preprovisionedpv-fqfq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fqfq
Sep 12 13:38:49.140: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fqfq" in namespace "provisioning-336"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":121,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:248.421 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:246.735 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
• [SLOW TEST:14.623 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":14,"skipped":113,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:54.487: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":5,"skipped":67,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:38:53.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Sep 12 13:38:54.651: INFO: found topology map[topology.kubernetes.io/zone:eu-central-1a]
Sep 12 13:38:54.651: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 12 13:38:54.651: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 27 lines ...
• [SLOW TEST:9.270 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":5,"skipped":11,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:55.693: INFO: Only supported for providers [vsphere] (not aws)
... skipping 212 lines ...
• [SLOW TEST:26.230 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":18,"skipped":150,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:56.486: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 48 lines ...
• [SLOW TEST:254.060 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:38:58.258: INFO: Only supported for providers [openstack] (not aws)
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:38:58.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1843" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":6,"skipped":70,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:9.670 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":11,"skipped":69,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:38:52.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-8e7fd604-8e92-4e4c-beeb-20325c5342fa
STEP: Creating a pod to test consume configMaps
Sep 12 13:38:53.274: INFO: Waiting up to 5m0s for pod "pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3" in namespace "configmap-4291" to be "Succeeded or Failed"
Sep 12 13:38:53.383: INFO: Pod "pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3": Phase="Pending", Reason="", readiness=false. Elapsed: 109.015653ms
Sep 12 13:38:55.494: INFO: Pod "pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219596607s
Sep 12 13:38:57.605: INFO: Pod "pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33065233s
Sep 12 13:38:59.720: INFO: Pod "pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.44564787s
STEP: Saw pod success
Sep 12 13:38:59.720: INFO: Pod "pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3" satisfied condition "Succeeded or Failed"
Sep 12 13:38:59.831: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3 container configmap-volume-test: <nil>
STEP: delete the pod
Sep 12 13:39:00.076: INFO: Waiting for pod pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3 to disappear
Sep 12 13:39:00.197: INFO: Pod pod-configmaps-852556b7-4324-419c-be7d-fe931afbeec3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.931 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:00.430: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 186 lines ...
    Requires at least 2 nodes (not 0)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:782
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":6,"skipped":58,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:05.539: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Sep 12 13:38:58.949: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-ea366f58-a0ea-4201-a65c-0c15e04e589e" in namespace "security-context-test-2544" to be "Succeeded or Failed"
Sep 12 13:38:59.057: INFO: Pod "alpine-nnp-nil-ea366f58-a0ea-4201-a65c-0c15e04e589e": Phase="Pending", Reason="", readiness=false. Elapsed: 108.154844ms
Sep 12 13:39:01.166: INFO: Pod "alpine-nnp-nil-ea366f58-a0ea-4201-a65c-0c15e04e589e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216860715s
Sep 12 13:39:03.276: INFO: Pod "alpine-nnp-nil-ea366f58-a0ea-4201-a65c-0c15e04e589e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327413948s
Sep 12 13:39:05.386: INFO: Pod "alpine-nnp-nil-ea366f58-a0ea-4201-a65c-0c15e04e589e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.43721962s
Sep 12 13:39:05.386: INFO: Pod "alpine-nnp-nil-ea366f58-a0ea-4201-a65c-0c15e04e589e" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:05.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2544" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":27,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:05.806: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":8,"skipped":53,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:38:27.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
Sep 12 13:38:36.185: INFO: Unable to read jessie_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:36.294: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:36.403: INFO: Unable to read jessie_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:36.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:36.620: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:36.732: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:37.387: INFO: Lookups using dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7451 wheezy_tcp@dns-test-service.dns-7451 wheezy_udp@dns-test-service.dns-7451.svc wheezy_tcp@dns-test-service.dns-7451.svc wheezy_udp@_http._tcp.dns-test-service.dns-7451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7451 jessie_tcp@dns-test-service.dns-7451 jessie_udp@dns-test-service.dns-7451.svc jessie_tcp@dns-test-service.dns-7451.svc jessie_udp@_http._tcp.dns-test-service.dns-7451.svc jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc]

Sep 12 13:38:42.501: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:42.610: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:42.723: INFO: Unable to read wheezy_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:42.853: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:42.962: INFO: Unable to read wheezy_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
... skipping 5 lines ...
Sep 12 13:38:44.315: INFO: Unable to read jessie_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:44.424: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:44.533: INFO: Unable to read jessie_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:44.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:44.752: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:44.861: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:45.558: INFO: Lookups using dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7451 wheezy_tcp@dns-test-service.dns-7451 wheezy_udp@dns-test-service.dns-7451.svc wheezy_tcp@dns-test-service.dns-7451.svc wheezy_udp@_http._tcp.dns-test-service.dns-7451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7451 jessie_tcp@dns-test-service.dns-7451 jessie_udp@dns-test-service.dns-7451.svc jessie_tcp@dns-test-service.dns-7451.svc jessie_udp@_http._tcp.dns-test-service.dns-7451.svc jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc]

Sep 12 13:38:47.496: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:47.605: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:47.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:47.824: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:47.933: INFO: Unable to read wheezy_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
... skipping 5 lines ...
Sep 12 13:38:49.253: INFO: Unable to read jessie_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:49.364: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:49.492: INFO: Unable to read jessie_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:49.628: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:49.737: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:49.855: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:50.539: INFO: Lookups using dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7451 wheezy_tcp@dns-test-service.dns-7451 wheezy_udp@dns-test-service.dns-7451.svc wheezy_tcp@dns-test-service.dns-7451.svc wheezy_udp@_http._tcp.dns-test-service.dns-7451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7451 jessie_tcp@dns-test-service.dns-7451 jessie_udp@dns-test-service.dns-7451.svc jessie_tcp@dns-test-service.dns-7451.svc jessie_udp@_http._tcp.dns-test-service.dns-7451.svc jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc]

Sep 12 13:38:52.498: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:52.615: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:52.724: INFO: Unable to read wheezy_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:52.833: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:52.944: INFO: Unable to read wheezy_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
... skipping 5 lines ...
Sep 12 13:38:54.262: INFO: Unable to read jessie_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:54.372: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:54.481: INFO: Unable to read jessie_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:54.590: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:54.699: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:54.808: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:55.465: INFO: Lookups using dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7451 wheezy_tcp@dns-test-service.dns-7451 wheezy_udp@dns-test-service.dns-7451.svc wheezy_tcp@dns-test-service.dns-7451.svc wheezy_udp@_http._tcp.dns-test-service.dns-7451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7451 jessie_tcp@dns-test-service.dns-7451 jessie_udp@dns-test-service.dns-7451.svc jessie_tcp@dns-test-service.dns-7451.svc jessie_udp@_http._tcp.dns-test-service.dns-7451.svc jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc]

Sep 12 13:38:57.495: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:57.605: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:57.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:57.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:57.933: INFO: Unable to read wheezy_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
... skipping 5 lines ...
Sep 12 13:38:59.265: INFO: Unable to read jessie_udp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:59.376: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451 from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:59.492: INFO: Unable to read jessie_udp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:59.615: INFO: Unable to read jessie_tcp@dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:59.725: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:38:59.837: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc from pod dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de: the server could not find the requested resource (get pods dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de)
Sep 12 13:39:00.513: INFO: Lookups using dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7451 wheezy_tcp@dns-test-service.dns-7451 wheezy_udp@dns-test-service.dns-7451.svc wheezy_tcp@dns-test-service.dns-7451.svc wheezy_udp@_http._tcp.dns-test-service.dns-7451.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7451.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7451 jessie_tcp@dns-test-service.dns-7451 jessie_udp@dns-test-service.dns-7451.svc jessie_tcp@dns-test-service.dns-7451.svc jessie_udp@_http._tcp.dns-test-service.dns-7451.svc jessie_tcp@_http._tcp.dns-test-service.dns-7451.svc]

Sep 12 13:39:05.509: INFO: DNS probes using dns-7451/dns-test-9c1736c8-4c35-43f0-8d47-7d7de3e626de succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:39.076 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:06.109: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 48 lines ...
• [SLOW TEST:8.780 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":7,"skipped":75,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 46 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:445
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":14,"skipped":82,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:07.502: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 56 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 12 13:38:57.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e" in namespace "projected-9230" to be "Succeeded or Failed"
Sep 12 13:38:57.290: INFO: Pod "downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e": Phase="Pending", Reason="", readiness=false. Elapsed: 106.984331ms
Sep 12 13:38:59.397: INFO: Pod "downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214419373s
Sep 12 13:39:01.506: INFO: Pod "downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3232922s
Sep 12 13:39:03.617: INFO: Pod "downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434128943s
Sep 12 13:39:05.730: INFO: Pod "downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547076486s
Sep 12 13:39:07.839: INFO: Pod "downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.655841053s
STEP: Saw pod success
Sep 12 13:39:07.839: INFO: Pod "downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e" satisfied condition "Succeeded or Failed"
Sep 12 13:39:07.947: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e container client-container: <nil>
STEP: delete the pod
Sep 12 13:39:08.173: INFO: Waiting for pod downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e to disappear
Sep 12 13:39:08.283: INFO: Pod downwardapi-volume-5f6b732e-22ef-40f1-bdfc-6764a0f6668e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.964 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":166,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Flexvolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 73 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1360
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1377
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":12,"skipped":71,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:09.579: INFO: Only supported for providers [gce gke] (not aws)
... skipping 233 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:12.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-838" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":5,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:13.070: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
Sep 12 13:38:56.316: INFO: PersistentVolumeClaim pvc-wsbxx found but phase is Pending instead of Bound.
Sep 12 13:38:58.424: INFO: PersistentVolumeClaim pvc-wsbxx found and phase=Bound (6.431589935s)
Sep 12 13:38:58.424: INFO: Waiting up to 3m0s for PersistentVolume local-q7vk2 to have phase Bound
Sep 12 13:38:58.532: INFO: PersistentVolume local-q7vk2 found and phase=Bound (107.410404ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mc42
STEP: Creating a pod to test subpath
Sep 12 13:38:58.858: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mc42" in namespace "provisioning-5562" to be "Succeeded or Failed"
Sep 12 13:38:58.965: INFO: Pod "pod-subpath-test-preprovisionedpv-mc42": Phase="Pending", Reason="", readiness=false. Elapsed: 107.310472ms
Sep 12 13:39:01.075: INFO: Pod "pod-subpath-test-preprovisionedpv-mc42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216587236s
Sep 12 13:39:03.193: INFO: Pod "pod-subpath-test-preprovisionedpv-mc42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335297322s
Sep 12 13:39:05.308: INFO: Pod "pod-subpath-test-preprovisionedpv-mc42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449609248s
Sep 12 13:39:07.422: INFO: Pod "pod-subpath-test-preprovisionedpv-mc42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563892503s
Sep 12 13:39:09.531: INFO: Pod "pod-subpath-test-preprovisionedpv-mc42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.673414446s
STEP: Saw pod success
Sep 12 13:39:09.532: INFO: Pod "pod-subpath-test-preprovisionedpv-mc42" satisfied condition "Succeeded or Failed"
Sep 12 13:39:09.639: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-mc42 container test-container-volume-preprovisionedpv-mc42: <nil>
STEP: delete the pod
Sep 12 13:39:09.861: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mc42 to disappear
Sep 12 13:39:09.969: INFO: Pod pod-subpath-test-preprovisionedpv-mc42 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mc42
Sep 12 13:39:09.969: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mc42" in namespace "provisioning-5562"
... skipping 59 lines ...
• [SLOW TEST:8.279 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":85,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:15.817: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 88 lines ...
Sep 12 13:38:40.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 12 13:38:41.124: INFO: created pod
Sep 12 13:38:41.124: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-3052" to be "Succeeded or Failed"
Sep 12 13:38:41.234: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 109.511627ms
Sep 12 13:38:43.347: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222458931s
Sep 12 13:38:45.458: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.333346536s
STEP: Saw pod success
Sep 12 13:38:45.458: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Sep 12 13:39:15.459: INFO: polling logs
Sep 12 13:39:15.571: INFO: Pod logs: 
2021/09/12 13:38:42 OK: Got token
2021/09/12 13:38:42 validating with in-cluster discovery
2021/09/12 13:38:42 OK: got issuer https://api.internal.e2e-d1d30942ba-b172d.test-cncf-aws.k8s.io
2021/09/12 13:38:42 Full, not-validated claims: 
... skipping 14 lines ...
• [SLOW TEST:35.552 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":12,"skipped":56,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Sep 12 13:39:09.050: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Sep 12 13:39:09.050: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-2fff
STEP: Creating a pod to test subpath
Sep 12 13:39:09.161: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-2fff" in namespace "provisioning-2346" to be "Succeeded or Failed"
Sep 12 13:39:09.275: INFO: Pod "pod-subpath-test-inlinevolume-2fff": Phase="Pending", Reason="", readiness=false. Elapsed: 113.88179ms
Sep 12 13:39:11.386: INFO: Pod "pod-subpath-test-inlinevolume-2fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225103717s
Sep 12 13:39:13.509: INFO: Pod "pod-subpath-test-inlinevolume-2fff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347859772s
Sep 12 13:39:15.618: INFO: Pod "pod-subpath-test-inlinevolume-2fff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.456631973s
STEP: Saw pod success
Sep 12 13:39:15.618: INFO: Pod "pod-subpath-test-inlinevolume-2fff" satisfied condition "Succeeded or Failed"
Sep 12 13:39:15.725: INFO: Trying to get logs from node ip-172-20-60-94.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-2fff container test-container-volume-inlinevolume-2fff: <nil>
STEP: delete the pod
Sep 12 13:39:15.955: INFO: Waiting for pod pod-subpath-test-inlinevolume-2fff to disappear
Sep 12 13:39:16.062: INFO: Pod pod-subpath-test-inlinevolume-2fff no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-2fff
Sep 12 13:39:16.062: INFO: Deleting pod "pod-subpath-test-inlinevolume-2fff" in namespace "provisioning-2346"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":20,"skipped":167,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:16.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-1853" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":16,"skipped":101,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":10,"skipped":101,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:17.691: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 62 lines ...
Sep 12 13:39:14.994: INFO: The status of Pod pod-update-activedeadlineseconds-7ca85269-3d3d-43fd-97e8-78b6655068dc is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 12 13:39:15.936: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7ca85269-3d3d-43fd-97e8-78b6655068dc"
Sep 12 13:39:15.936: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7ca85269-3d3d-43fd-97e8-78b6655068dc" in namespace "pods-5325" to be "terminated due to deadline exceeded"
Sep 12 13:39:16.044: INFO: Pod "pod-update-activedeadlineseconds-7ca85269-3d3d-43fd-97e8-78b6655068dc": Phase="Running", Reason="", readiness=true. Elapsed: 108.525074ms
Sep 12 13:39:18.153: INFO: Pod "pod-update-activedeadlineseconds-7ca85269-3d3d-43fd-97e8-78b6655068dc": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 2.217811284s
Sep 12 13:39:18.153: INFO: Pod "pod-update-activedeadlineseconds-7ca85269-3d3d-43fd-97e8-78b6655068dc" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:18.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5325" for this suite.

... skipping 88 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":14,"skipped":111,"failed":0}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:18.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9954" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":11,"skipped":106,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:39:18.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Sep 12 13:39:19.611: INFO: found topology map[topology.kubernetes.io/zone:eu-central-1a]
Sep 12 13:39:19.611: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Sep 12 13:39:19.611: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:20.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3280" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":12,"skipped":109,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:20.576: INFO: Only supported for providers [openstack] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 6 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-7020be67-359a-4394-905a-d5e6fc77c62b
STEP: Creating a pod to test consume configMaps
Sep 12 13:39:13.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad" in namespace "configmap-4038" to be "Succeeded or Failed"
Sep 12 13:39:14.005: INFO: Pod "pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad": Phase="Pending", Reason="", readiness=false. Elapsed: 109.722317ms
Sep 12 13:39:16.121: INFO: Pod "pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225793972s
Sep 12 13:39:18.231: INFO: Pod "pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336031643s
Sep 12 13:39:20.343: INFO: Pod "pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.447966837s
STEP: Saw pod success
Sep 12 13:39:20.343: INFO: Pod "pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad" satisfied condition "Succeeded or Failed"
Sep 12 13:39:20.453: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad container agnhost-container: <nil>
STEP: delete the pod
Sep 12 13:39:20.687: INFO: Waiting for pod pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad to disappear
Sep 12 13:39:20.801: INFO: Pod pod-configmaps-6077f274-966c-42b5-8056-886dd6460dad no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:21.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5131" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":13,"skipped":116,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:997
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1042
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":13,"skipped":95,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:18.680 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":8,"skipped":94,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:27.688: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 122 lines ...
Sep 12 13:38:59.748: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:sctp-228 PodName:hostexec-ip-172-20-45-127.eu-central-1.compute.internal-thrl6 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 12 13:38:59.748: INFO: >>> kubeConfig: /root/.kube/config
Sep 12 13:39:00.514: INFO: exec ip-172-20-45-127.eu-central-1.compute.internal: command:   lsmod | grep sctp
Sep 12 13:39:00.514: INFO: exec ip-172-20-45-127.eu-central-1.compute.internal: stdout:    ""
Sep 12 13:39:00.514: INFO: exec ip-172-20-45-127.eu-central-1.compute.internal: stderr:    ""
Sep 12 13:39:00.514: INFO: exec ip-172-20-45-127.eu-central-1.compute.internal: exit code: 0
Sep 12 13:39:00.514: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Sep 12 13:39:00.514: INFO: the sctp module is not loaded on node: ip-172-20-45-127.eu-central-1.compute.internal
Sep 12 13:39:00.514: INFO: Executing cmd "lsmod | grep sctp" on node ip-172-20-60-94.eu-central-1.compute.internal
Sep 12 13:39:04.853: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:sctp-228 PodName:hostexec-ip-172-20-60-94.eu-central-1.compute.internal-n4ms7 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 12 13:39:04.853: INFO: >>> kubeConfig: /root/.kube/config
Sep 12 13:39:05.600: INFO: exec ip-172-20-60-94.eu-central-1.compute.internal: command:   lsmod | grep sctp
Sep 12 13:39:05.600: INFO: exec ip-172-20-60-94.eu-central-1.compute.internal: stdout:    ""
Sep 12 13:39:05.601: INFO: exec ip-172-20-60-94.eu-central-1.compute.internal: stderr:    ""
Sep 12 13:39:05.601: INFO: exec ip-172-20-60-94.eu-central-1.compute.internal: exit code: 0
Sep 12 13:39:05.601: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Sep 12 13:39:05.601: INFO: the sctp module is not loaded on node: ip-172-20-60-94.eu-central-1.compute.internal
STEP: Deleting pod hostexec-ip-172-20-60-94.eu-central-1.compute.internal-n4ms7 in namespace sctp-228
STEP: Deleting pod hostexec-ip-172-20-45-127.eu-central-1.compute.internal-thrl6 in namespace sctp-228
STEP: creating service sctp-endpoint-test in namespace sctp-228
Sep 12 13:39:06.061: INFO: Service sctp-endpoint-test in namespace sctp-228 found.
STEP: validating endpoints do not exist yet
... skipping 19 lines ...
Sep 12 13:39:20.054: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:sctp-228 PodName:hostexec-ip-172-20-45-127.eu-central-1.compute.internal-rqlr6 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 12 13:39:20.054: INFO: >>> kubeConfig: /root/.kube/config
Sep 12 13:39:20.787: INFO: exec ip-172-20-45-127.eu-central-1.compute.internal: command:   lsmod | grep sctp
Sep 12 13:39:20.787: INFO: exec ip-172-20-45-127.eu-central-1.compute.internal: stdout:    ""
Sep 12 13:39:20.787: INFO: exec ip-172-20-45-127.eu-central-1.compute.internal: stderr:    ""
Sep 12 13:39:20.787: INFO: exec ip-172-20-45-127.eu-central-1.compute.internal: exit code: 0
Sep 12 13:39:20.787: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Sep 12 13:39:20.787: INFO: the sctp module is not loaded on node: ip-172-20-45-127.eu-central-1.compute.internal
Sep 12 13:39:20.787: INFO: Executing cmd "lsmod | grep sctp" on node ip-172-20-60-94.eu-central-1.compute.internal
Sep 12 13:39:29.133: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:sctp-228 PodName:hostexec-ip-172-20-60-94.eu-central-1.compute.internal-6fwrb ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Sep 12 13:39:29.133: INFO: >>> kubeConfig: /root/.kube/config
Sep 12 13:39:29.897: INFO: exec ip-172-20-60-94.eu-central-1.compute.internal: command:   lsmod | grep sctp
Sep 12 13:39:29.897: INFO: exec ip-172-20-60-94.eu-central-1.compute.internal: stdout:    ""
Sep 12 13:39:29.897: INFO: exec ip-172-20-60-94.eu-central-1.compute.internal: stderr:    ""
Sep 12 13:39:29.897: INFO: exec ip-172-20-60-94.eu-central-1.compute.internal: exit code: 0
Sep 12 13:39:29.897: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
Sep 12 13:39:29.897: INFO: the sctp module is not loaded on node: ip-172-20-60-94.eu-central-1.compute.internal
STEP: Deleting pod hostexec-ip-172-20-45-127.eu-central-1.compute.internal-rqlr6 in namespace sctp-228
STEP: Deleting pod hostexec-ip-172-20-60-94.eu-central-1.compute.internal-6fwrb in namespace sctp-228
[AfterEach] [sig-network] SCTP [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:30.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
• [SLOW TEST:39.731 seconds]
[sig-network] SCTP [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow creating a basic SCTP service with pod and endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3220
------------------------------
{"msg":"PASSED [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints","total":-1,"completed":12,"skipped":123,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:30.487: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:31.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-5767" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":13,"skipped":129,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:32.062: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 12 13:39:17.183: INFO: Waiting up to 5m0s for pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157" in namespace "emptydir-5968" to be "Succeeded or Failed"
Sep 12 13:39:17.291: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157": Phase="Pending", Reason="", readiness=false. Elapsed: 107.418247ms
Sep 12 13:39:19.398: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215199847s
Sep 12 13:39:21.507: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323489193s
Sep 12 13:39:23.617: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433688633s
Sep 12 13:39:25.731: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548376653s
Sep 12 13:39:27.841: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657652507s
Sep 12 13:39:29.948: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157": Phase="Pending", Reason="", readiness=false. Elapsed: 12.76518362s
Sep 12 13:39:32.057: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.874021731s
STEP: Saw pod success
Sep 12 13:39:32.057: INFO: Pod "pod-48275c0e-dd38-42cb-96f2-414bb6adb157" satisfied condition "Succeeded or Failed"
Sep 12 13:39:32.164: INFO: Trying to get logs from node ip-172-20-34-134.eu-central-1.compute.internal pod pod-48275c0e-dd38-42cb-96f2-414bb6adb157 container test-container: <nil>
STEP: delete the pod
Sep 12 13:39:32.391: INFO: Waiting for pod pod-48275c0e-dd38-42cb-96f2-414bb6adb157 to disappear
Sep 12 13:39:32.499: INFO: Pod pod-48275c0e-dd38-42cb-96f2-414bb6adb157 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":21,"skipped":173,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:32.737: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":46,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:39:21.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":7,"skipped":46,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 9 lines ...
Sep 12 13:38:56.317: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9841krnh7
STEP: creating a claim
Sep 12 13:38:56.426: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-9rh6
STEP: Creating a pod to test subpath
Sep 12 13:38:56.766: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9rh6" in namespace "provisioning-9841" to be "Succeeded or Failed"
Sep 12 13:38:56.873: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 107.353423ms
Sep 12 13:38:58.981: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215636422s
Sep 12 13:39:01.091: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325089952s
Sep 12 13:39:03.204: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437917544s
Sep 12 13:39:05.320: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554802885s
Sep 12 13:39:07.429: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663448134s
Sep 12 13:39:09.538: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.772635132s
Sep 12 13:39:11.685: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.919677591s
Sep 12 13:39:13.793: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.027744331s
Sep 12 13:39:15.903: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Pending", Reason="", readiness=false. Elapsed: 19.137025719s
Sep 12 13:39:18.015: INFO: Pod "pod-subpath-test-dynamicpv-9rh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.249348923s
STEP: Saw pod success
Sep 12 13:39:18.015: INFO: Pod "pod-subpath-test-dynamicpv-9rh6" satisfied condition "Succeeded or Failed"
Sep 12 13:39:18.125: INFO: Trying to get logs from node ip-172-20-48-249.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-9rh6 container test-container-subpath-dynamicpv-9rh6: <nil>
STEP: delete the pod
Sep 12 13:39:18.365: INFO: Waiting for pod pod-subpath-test-dynamicpv-9rh6 to disappear
Sep 12 13:39:18.473: INFO: Pod pod-subpath-test-dynamicpv-9rh6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9rh6
Sep 12 13:39:18.473: INFO: Deleting pod "pod-subpath-test-dynamicpv-9rh6" in namespace "provisioning-9841"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:365
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:34.789: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
Sep 12 13:39:06.185: INFO: PersistentVolume nfs-5snzh found and phase=Bound (111.492616ms)
Sep 12 13:39:06.292: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-vgdc6] to have phase Bound
Sep 12 13:39:06.400: INFO: PersistentVolumeClaim pvc-vgdc6 found and phase=Bound (108.287807ms)
STEP: Checking pod has write access to PersistentVolumes
Sep 12 13:39:06.512: INFO: Creating nfs test pod
Sep 12 13:39:06.620: INFO: Pod should terminate with exitcode 0 (success)
Sep 12 13:39:06.620: INFO: Waiting up to 5m0s for pod "pvc-tester-rcvbv" in namespace "pv-5548" to be "Succeeded or Failed"
Sep 12 13:39:06.728: INFO: Pod "pvc-tester-rcvbv": Phase="Pending", Reason="", readiness=false. Elapsed: 107.413048ms
Sep 12 13:39:08.836: INFO: Pod "pvc-tester-rcvbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215310786s
Sep 12 13:39:10.945: INFO: Pod "pvc-tester-rcvbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324192701s
Sep 12 13:39:13.054: INFO: Pod "pvc-tester-rcvbv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.433117541s
STEP: Saw pod success
Sep 12 13:39:13.054: INFO: Pod "pvc-tester-rcvbv" satisfied condition "Succeeded or Failed"
Sep 12 13:39:13.054: INFO: Pod pvc-tester-rcvbv succeeded 
Sep 12 13:39:13.054: INFO: Deleting pod "pvc-tester-rcvbv" in namespace "pv-5548"
Sep 12 13:39:13.169: INFO: Wait up to 5m0s for pod "pvc-tester-rcvbv" to be fully deleted
Sep 12 13:39:13.390: INFO: Creating nfs test pod
Sep 12 13:39:13.515: INFO: Pod should terminate with exitcode 0 (success)
Sep 12 13:39:13.515: INFO: Waiting up to 5m0s for pod "pvc-tester-7sqlt" in namespace "pv-5548" to be "Succeeded or Failed"
Sep 12 13:39:13.623: INFO: Pod "pvc-tester-7sqlt": Phase="Pending", Reason="", readiness=false. Elapsed: 108.188131ms
Sep 12 13:39:15.731: INFO: Pod "pvc-tester-7sqlt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216418373s
Sep 12 13:39:17.842: INFO: Pod "pvc-tester-7sqlt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327161746s
Sep 12 13:39:19.951: INFO: Pod "pvc-tester-7sqlt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436341273s
Sep 12 13:39:22.064: INFO: Pod "pvc-tester-7sqlt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549150611s
Sep 12 13:39:24.172: INFO: Pod "pvc-tester-7sqlt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.657500454s
STEP: Saw pod success
Sep 12 13:39:24.172: INFO: Pod "pvc-tester-7sqlt" satisfied condition "Succeeded or Failed"
Sep 12 13:39:24.172: INFO: Pod pvc-tester-7sqlt succeeded 
Sep 12 13:39:24.172: INFO: Deleting pod "pvc-tester-7sqlt" in namespace "pv-5548"
Sep 12 13:39:24.289: INFO: Wait up to 5m0s for pod "pvc-tester-7sqlt" to be fully deleted
STEP: Deleting PVCs to invoke reclaim policy
Sep 12 13:39:24.858: INFO: Deleting PVC pvc-vgdc6 to trigger reclamation of PV nfs-5snzh
Sep 12 13:39:24.858: INFO: Deleting PersistentVolumeClaim "pvc-vgdc6"
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":12,"skipped":114,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:34.819: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 97 lines ...
Sep 12 13:39:28.452: INFO: Waiting for pod aws-client to disappear
Sep 12 13:39:28.561: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Sep 12 13:39:28.561: INFO: Deleting PersistentVolumeClaim "pvc-snpb9"
Sep 12 13:39:28.672: INFO: Deleting PersistentVolume "aws-l55ch"
Sep 12 13:39:29.380: INFO: Couldn't delete PD "aws://eu-central-1a/vol-0053bc47d007c8d84", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-0053bc47d007c8d84 is currently attached to i-0297e8026ac566bba
	status code: 400, request id: 5f4ec6ff-4b67-4665-9920-d1073c5c5d47
Sep 12 13:39:34.971: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0053bc47d007c8d84".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:34.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-5322" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":12,"skipped":186,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":69,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:39:14.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:21.087 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":9,"skipped":69,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:35.242: INFO: Only supported for providers [openstack] (not aws)
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:35.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3290" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":8,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:35.303: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 73 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:35.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":9,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":47,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:35.854: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:36.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5475" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":7,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:36.275: INFO: Only supported for providers [azure] (not aws)
... skipping 40 lines ...
• [SLOW TEST:8.537 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":9,"skipped":116,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:36.324: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 53 lines ...
STEP: Destroying namespace "apply-2653" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":13,"skipped":188,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:36.756: INFO: Only supported for providers [vsphere] (not aws)
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:36.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1155" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":13,"skipped":120,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:13.396 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":14,"skipped":96,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:38.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-407" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":4,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 12 13:39:37.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Sep 12 13:39:37.682: INFO: Waiting up to 5m0s for pod "security-context-2cf56b33-e15e-4ae5-80f5-42283bb9ba7d" in namespace "security-context-9565" to be "Succeeded or Failed"
Sep 12 13:39:37.792: INFO: Pod "security-context-2cf56b33-e15e-4ae5-80f5-42283bb9ba7d": Phase="Pending", Reason="", readiness=false. Elapsed: 110.226467ms
Sep 12 13:39:39.900: INFO: Pod "security-context-2cf56b33-e15e-4ae5-80f5-42283bb9ba7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218284878s
STEP: Saw pod success
Sep 12 13:39:39.900: INFO: Pod "security-context-2cf56b33-e15e-4ae5-80f5-42283bb9ba7d" satisfied condition "Succeeded or Failed"
Sep 12 13:39:40.014: INFO: Trying to get logs from node ip-172-20-45-127.eu-central-1.compute.internal pod security-context-2cf56b33-e15e-4ae5-80f5-42283bb9ba7d container test-container: <nil>
STEP: delete the pod
Sep 12 13:39:40.246: INFO: Waiting for pod security-context-2cf56b33-e15e-4ae5-80f5-42283bb9ba7d to disappear
Sep 12 13:39:40.356: INFO: Pod security-context-2cf56b33-e15e-4ae5-80f5-42283bb9ba7d no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:40.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9565" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":14,"skipped":121,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
Sep 12 13:39:15.163: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:15.274: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:16.070: INFO: Unable to read jessie_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:16.180: INFO: Unable to read jessie_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:16.290: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:16.401: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:17.064: INFO: Lookups using dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683 failed for: [wheezy_udp@dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_udp@dns-test-service.dns-8260.svc.cluster.local jessie_tcp@dns-test-service.dns-8260.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local]

Sep 12 13:39:22.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:22.284: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:22.396: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:22.507: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:23.294: INFO: Unable to read jessie_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:23.404: INFO: Unable to read jessie_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:23.518: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:23.629: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:24.302: INFO: Lookups using dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683 failed for: [wheezy_udp@dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_udp@dns-test-service.dns-8260.svc.cluster.local jessie_tcp@dns-test-service.dns-8260.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local]

Sep 12 13:39:27.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:27.285: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:27.396: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:27.509: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:28.287: INFO: Unable to read jessie_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:28.403: INFO: Unable to read jessie_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:28.513: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:28.623: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:29.285: INFO: Lookups using dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683 failed for: [wheezy_udp@dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_udp@dns-test-service.dns-8260.svc.cluster.local jessie_tcp@dns-test-service.dns-8260.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local]

Sep 12 13:39:32.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:32.285: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:32.395: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:32.505: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:33.287: INFO: Unable to read jessie_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:33.397: INFO: Unable to read jessie_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:33.507: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:33.617: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:34.290: INFO: Lookups using dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683 failed for: [wheezy_udp@dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_udp@dns-test-service.dns-8260.svc.cluster.local jessie_tcp@dns-test-service.dns-8260.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local]

Sep 12 13:39:37.182: INFO: Unable to read wheezy_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:37.293: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:37.406: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:37.516: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:38.304: INFO: Unable to read jessie_udp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:38.414: INFO: Unable to read jessie_tcp@dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:38.526: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:38.636: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local from pod dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683: the server could not find the requested resource (get pods dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683)
Sep 12 13:39:39.303: INFO: Lookups using dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683 failed for: [wheezy_udp@dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@dns-test-service.dns-8260.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_udp@dns-test-service.dns-8260.svc.cluster.local jessie_tcp@dns-test-service.dns-8260.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8260.svc.cluster.local]

Sep 12 13:39:44.328: INFO: DNS probes using dns-8260/dns-test-3dcb14e8-68b4-4e4a-87f4-7e4843d7e683 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:39.321 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":7,"skipped":74,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:44.927: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 149 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":15,"skipped":97,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
Sep 12 13:39:36.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 12 13:39:38.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Sep 12 13:39:40.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63767050774, loc:(*time.Location)(0xa5cc7a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 12 13:39:43.837: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 12 13:39:44.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7622" for this suite.
... skipping 2 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• [SLOW TEST:12.631 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":22,"skipped":186,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Sep 12 13:39:45.425: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 33465 lines ...
5-eda8-4831-a4e6-de04aed49874\" PVC=\"persistent-local-volumes-expansion-4045/pvc-v65m7\"\nI0912 13:41:36.762688       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-expansion-4045/pvc-v65m7\"\nI0912 13:41:36.866443       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-4389/test-rs\" need=4 creating=2\nI0912 13:41:36.871092       1 event.go:294] \"Event occurred\" object=\"replicaset-4389/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-m5j27\"\nI0912 13:41:36.892515       1 namespace_controller.go:185] Namespace has been deleted provisioning-9712\nI0912 13:41:36.893725       1 event.go:294] \"Event occurred\" object=\"replicaset-4389/test-rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rs-4cnxp\"\nI0912 13:41:36.941622       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9712-3211/csi-hostpathplugin-67df98db6b\" objectUID=98378abb-a568-4c7f-879d-d8f375c296b0 kind=\"ControllerRevision\" virtual=false\nI0912 13:41:36.941888       1 stateful_set.go:440] StatefulSet has been deleted provisioning-9712-3211/csi-hostpathplugin\nI0912 13:41:36.941963       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-9712-3211/csi-hostpathplugin-0\" objectUID=0113e896-144b-4d76-a56d-5926cabab505 kind=\"Pod\" virtual=false\nI0912 13:41:36.947228       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9712-3211/csi-hostpathplugin-67df98db6b\" objectUID=98378abb-a568-4c7f-879d-d8f375c296b0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0912 13:41:36.947416       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-9712-3211/csi-hostpathplugin-0\" objectUID=0113e896-144b-4d76-a56d-5926cabab505 kind=\"Pod\" propagationPolicy=Background\nI0912 13:41:36.961699       1 
pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-expansion-4045/pod-c1c30b15-eda8-4831-a4e6-de04aed49874\" PVC=\"persistent-local-volumes-expansion-4045/pvc-v65m7\"\nI0912 13:41:36.961901       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-expansion-4045/pvc-v65m7\"\nI0912 13:41:36.968770       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-expansion-4045/pvc-v65m7\"\nI0912 13:41:36.976975       1 pv_controller.go:640] volume \"local-pvh76xv\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:41:36.981067       1 pv_controller.go:879] volume \"local-pvh76xv\" entered phase \"Released\"\nI0912 13:41:36.985857       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-expansion-4045/pvc-v65m7\" was already processed\nI0912 13:41:37.478740       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-6680\nI0912 13:41:38.106382       1 namespace_controller.go:185] Namespace has been deleted volumemode-3119\nE0912 13:41:38.315738       1 tokens_controller.go:262] error synchronizing serviceaccount endpointslice-1816/default: secrets \"default-token-4z4mf\" is forbidden: unable to create new content in namespace endpointslice-1816 because it is being terminated\nI0912 13:41:38.356226       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:41:38.356402       1 event.go:294] \"Event occurred\" object=\"provisioning-7262/pod-subpath-test-dynamicpv-t5m8\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\\\" \"\nE0912 13:41:38.776424       
1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:41:38.782179       1 namespace_controller.go:185] Namespace has been deleted services-6336\nI0912 13:41:38.888315       1 stateful_set_control.go:521] StatefulSet statefulset-2162/ss terminating Pod ss-2 for scale down\nI0912 13:41:38.894895       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-2 in StatefulSet ss successful\"\nI0912 13:41:39.216831       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-1985/pvc-wdwxm\"\nI0912 13:41:39.260618       1 pv_controller.go:640] volume \"pvc-bcfbee09-a9f1-44b6-9359-cb46d2d1f750\" is released and reclaim policy \"Delete\" will be executed\nI0912 13:41:39.292446       1 pv_controller.go:879] volume \"pvc-bcfbee09-a9f1-44b6-9359-cb46d2d1f750\" entered phase \"Released\"\nI0912 13:41:39.306908       1 namespace_controller.go:185] Namespace has been deleted metadata-concealment-8297\nI0912 13:41:39.324762       1 pv_controller.go:1340] isVolumeReleased[pvc-bcfbee09-a9f1-44b6-9359-cb46d2d1f750]: volume is released\nI0912 13:41:39.388966       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-1985/pvc-wdwxm\" was already processed\nI0912 13:41:39.654957       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-4023\nI0912 13:41:40.057612       1 stateful_set_control.go:521] StatefulSet statefulset-2162/ss terminating Pod ss-1 for scale down\nI0912 13:41:40.070800       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nI0912 13:41:40.752163       
1 namespace_controller.go:185] Namespace has been deleted volume-9808\nI0912 13:41:41.539109       1 pv_controller.go:930] claim \"volume-7955/pvc-p2bqb\" bound to volume \"aws-j5cc2\"\nI0912 13:41:41.549070       1 pv_controller.go:879] volume \"aws-j5cc2\" entered phase \"Bound\"\nI0912 13:41:41.549304       1 pv_controller.go:982] volume \"aws-j5cc2\" bound to claim \"volume-7955/pvc-p2bqb\"\nI0912 13:41:41.557663       1 pv_controller.go:823] claim \"volume-7955/pvc-p2bqb\" entered phase \"Bound\"\nI0912 13:41:41.829228       1 namespace_controller.go:185] Namespace has been deleted kubectl-9825\nI0912 13:41:42.232290       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e136abf09eb16f0d\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:41:42.244791       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e136abf09eb16f0d\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nE0912 13:41:42.338981       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9712-3211/default: secrets \"default-token-fpwj4\" is forbidden: unable to create new content in namespace provisioning-9712-3211 because it is being terminated\nI0912 13:41:42.344172       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replicaset-4389/test-rs\" need=4 creating=1\nI0912 13:41:42.356444       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-4389/test-rs-5kjzc\" objectUID=7c28f1a7-b476-4f09-b50f-249302c269ce kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:42.367804       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-4389/test-rs-5kjzc\" objectUID=7c28f1a7-b476-4f09-b50f-249302c269ce kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 
13:41:42.379154       1 garbagecollector.go:471] \"Processing object\" object=\"replicaset-4389/test-rs-vdhtl\" objectUID=3c70030f-a3e4-48a8-aa36-4fd476ce36db kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:42.388791       1 garbagecollector.go:580] \"Deleting object\" object=\"replicaset-4389/test-rs-vdhtl\" objectUID=3c70030f-a3e4-48a8-aa36-4fd476ce36db kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:41:42.897836       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6758/webserver-ddb74847c\" need=3 creating=3\nI0912 13:41:42.898839       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-ddb74847c to 3\"\nI0912 13:41:42.902230       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6758/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:41:42.905174       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-dvz4p\"\nI0912 13:41:42.913865       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-m4qpt\"\nI0912 13:41:42.913987       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-x45zr\"\nI0912 13:41:42.921878       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" need=8 deleting=2\nI0912 
13:41:42.921919       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" relatedReplicaSets=[webserver-7fb4dff56c webserver-6584b976d5 webserver-ddb74847c]\nI0912 13:41:42.922048       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-7fb4dff56c\" pod=\"deployment-6758/webserver-7fb4dff56c-cwld8\"\nI0912 13:41:42.922210       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-7fb4dff56c\" pod=\"deployment-6758/webserver-7fb4dff56c-dx72q\"\nI0912 13:41:42.924078       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7fb4dff56c to 8\"\nI0912 13:41:42.934570       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6758/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:41:42.947413       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-7fb4dff56c-cwld8\" objectUID=a7ca5299-1f73-4e5a-9c11-d0eb20670400 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:42.948698       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-ddb74847c to 5\"\nI0912 13:41:42.954867       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-7fb4dff56c-dx72q\" objectUID=d58bedda-d3d6-4dd5-923b-3b72aead7324 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:42.956602       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-7fb4dff56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: 
webserver-7fb4dff56c-dx72q\"\nI0912 13:41:42.956630       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-7fb4dff56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7fb4dff56c-cwld8\"\nI0912 13:41:42.977217       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6758/webserver-ddb74847c\" need=5 creating=2\nI0912 13:41:42.977560       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6758/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:41:42.988994       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-tv98p\"\nI0912 13:41:42.997483       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-tnpw8\"\nI0912 13:41:42.997592       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-7fb4dff56c-dx72q\" objectUID=d58bedda-d3d6-4dd5-923b-3b72aead7324 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:41:42.998417       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-7fb4dff56c-cwld8\" objectUID=a7ca5299-1f73-4e5a-9c11-d0eb20670400 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:41:43.459766       1 namespace_controller.go:185] Namespace has been deleted nettest-8541\nI0912 13:41:43.473028       1 namespace_controller.go:185] Namespace has been deleted endpointslice-1816\nE0912 13:41:43.561094       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed 
to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:41:43.713195       1 namespace_controller.go:185] Namespace has been deleted hostpath-2570\nI0912 13:41:43.800887       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-expansion-4045\nI0912 13:41:43.844382       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-j5cc2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:41:44.252404       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" need=7 deleting=1\nI0912 13:41:44.252437       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" relatedReplicaSets=[webserver-ddb74847c webserver-7fb4dff56c webserver-6584b976d5]\nI0912 13:41:44.253159       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7fb4dff56c to 7\"\nI0912 13:41:44.253264       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-7fb4dff56c\" pod=\"deployment-6758/webserver-7fb4dff56c-nbc8l\"\nI0912 13:41:44.262924       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6758/webserver-ddb74847c\" need=6 creating=1\nI0912 13:41:44.262939       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-ddb74847c to 6\"\nI0912 13:41:44.267225       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-7fb4dff56c-nbc8l\" objectUID=c57ee7e2-c5a2-45d7-904c-f4de80509b4e kind=\"CiliumEndpoint\" virtual=false\nI0912 
13:41:44.267848       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6758/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:41:44.268969       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-7fb4dff56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7fb4dff56c-nbc8l\"\nI0912 13:41:44.276839       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-7fb4dff56c-nbc8l\" objectUID=c57ee7e2-c5a2-45d7-904c-f4de80509b4e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:41:44.277486       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-frbvl\"\nI0912 13:41:44.534415       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" need=6 deleting=1\nI0912 13:41:44.534457       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" relatedReplicaSets=[webserver-6584b976d5 webserver-ddb74847c webserver-7fb4dff56c]\nI0912 13:41:44.534739       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-7fb4dff56c\" pod=\"deployment-6758/webserver-7fb4dff56c-2jvhg\"\nI0912 13:41:44.535897       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7fb4dff56c to 6\"\nI0912 13:41:44.543929       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-7fb4dff56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" 
message=\"Deleted pod: webserver-7fb4dff56c-2jvhg\"\nI0912 13:41:44.544397       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-7fb4dff56c-2jvhg\" objectUID=33570878-44d4-4216-ba4b-2ce8417fc94d kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:44.552875       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6758/webserver-ddb74847c\" need=7 creating=1\nI0912 13:41:44.553359       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-ddb74847c to 7\"\nI0912 13:41:44.556775       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-7fb4dff56c-2jvhg\" objectUID=33570878-44d4-4216-ba4b-2ce8417fc94d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:41:44.560718       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6758/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:41:44.561198       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-grfpz\"\nI0912 13:41:44.800120       1 namespace_controller.go:185] Namespace has been deleted security-context-test-554\nI0912 13:41:44.933505       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" need=5 deleting=1\nI0912 13:41:44.933550       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" relatedReplicaSets=[webserver-7fb4dff56c webserver-6584b976d5 webserver-ddb74847c]\nI0912 13:41:44.933958       1 controller_utils.go:592] \"Deleting pod\" 
controller=\"webserver-7fb4dff56c\" pod=\"deployment-6758/webserver-7fb4dff56c-flmgp\"\nI0912 13:41:44.934597       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7fb4dff56c to 5\"\nI0912 13:41:44.952571       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6758/webserver-ddb74847c\" need=8 creating=1\nI0912 13:41:44.953061       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-ddb74847c to 8\"\nI0912 13:41:44.953085       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-7fb4dff56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7fb4dff56c-flmgp\"\nI0912 13:41:44.953158       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-7fb4dff56c-flmgp\" objectUID=aafb7f06-0e3d-46c6-b76a-21a3b38be609 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:44.962596       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-7fb4dff56c-flmgp\" objectUID=aafb7f06-0e3d-46c6-b76a-21a3b38be609 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:41:44.968656       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-k64zd\"\nI0912 13:41:44.993290       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-6758/webserver\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:41:45.131378    
   1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-8372/pvc-vht6d\"\nI0912 13:41:45.140094       1 pv_controller.go:640] volume \"local-nxst7\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:41:45.143626       1 pv_controller.go:879] volume \"local-nxst7\" entered phase \"Released\"\nI0912 13:41:45.251689       1 pv_controller_base.go:505] deletion of claim \"volumemode-8372/pvc-vht6d\" was already processed\nI0912 13:41:45.271053       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" need=4 deleting=1\nI0912 13:41:45.271091       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-6758/webserver-7fb4dff56c\" relatedReplicaSets=[webserver-ddb74847c webserver-7fb4dff56c webserver-6584b976d5]\nI0912 13:41:45.271232       1 controller_utils.go:592] \"Deleting pod\" controller=\"webserver-7fb4dff56c\" pod=\"deployment-6758/webserver-7fb4dff56c-bnh27\"\nI0912 13:41:45.271782       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7fb4dff56c to 4\"\nI0912 13:41:45.283220       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-6758/webserver-ddb74847c\" need=9 creating=1\nI0912 13:41:45.285543       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-ddb74847c to 9\"\nI0912 13:41:45.294209       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-ddb74847c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-ddb74847c-czjlt\"\nI0912 13:41:45.295564       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-7fb4dff56c-bnh27\" 
objectUID=c287ffaf-9438-40ab-9496-87897337b3a1 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:45.296559       1 event.go:294] \"Event occurred\" object=\"deployment-6758/webserver-7fb4dff56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7fb4dff56c-bnh27\"\nI0912 13:41:45.310724       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-7fb4dff56c-bnh27\" objectUID=c287ffaf-9438-40ab-9496-87897337b3a1 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0912 13:41:45.422685       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:41:46.388182       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-1707/pvc-9cpfm\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0912 13:41:46.701796       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-5885/my-hostname-basic-7a1c2f32-83c2-44ec-be86-4df03de222a0\" need=1 creating=1\nI0912 13:41:46.709755       1 event.go:294] \"Event occurred\" object=\"replication-controller-5885/my-hostname-basic-7a1c2f32-83c2-44ec-be86-4df03de222a0\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: my-hostname-basic-7a1c2f32-83c2-44ec-be86-4df03de222a0-6v8zt\"\nE0912 13:41:46.840543       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-1985/default: secrets \"default-token-jpjqg\" is forbidden: unable to create new content in namespace csi-mock-volumes-1985 because it is being terminated\nI0912 13:41:47.066365       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, 
uid: 29147406-d8da-409a-b2c8-b5b9dc5ec2af] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0912 13:41:47.066931       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0" objectUID=5d619c9c-28fa-4025-a354-b7cda6811bcb kind="PersistentVolumeClaim" virtual=false
I0912 13:41:47.067024       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf" objectUID=83f95a54-9eca-4469-b5da-85400d34d2b0 kind="CiliumEndpoint" virtual=false
I0912 13:41:47.067350       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1" objectUID=d134a2f2-d24b-4caa-8648-928e55e641e5 kind="PersistentVolumeClaim" virtual=false
I0912 13:41:47.067580       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf" objectUID=29147406-d8da-409a-b2c8-b5b9dc5ec2af kind="Pod" virtual=false
I0912 13:41:47.075468       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf-my-volume-1, uid: d134a2f2-d24b-4caa-8648-928e55e641e5] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, uid: 29147406-d8da-409a-b2c8-b5b9dc5ec2af] is deletingDependents
I0912 13:41:47.075573       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, uid: 83f95a54-9eca-4469-b5da-85400d34d2b0] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, uid: 29147406-d8da-409a-b2c8-b5b9dc5ec2af] is deletingDependents
I0912 13:41:47.075640       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf-my-volume-0, uid: 5d619c9c-28fa-4025-a354-b7cda6811bcb] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, uid: 29147406-d8da-409a-b2c8-b5b9dc5ec2af] is deletingDependents
I0912 13:41:47.079264       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0" objectUID=5d619c9c-28fa-4025-a354-b7cda6811bcb kind="PersistentVolumeClaim" propagationPolicy=Background
I0912 13:41:47.079477       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1" objectUID=d134a2f2-d24b-4caa-8648-928e55e641e5 kind="PersistentVolumeClaim" propagationPolicy=Background
I0912 13:41:47.084464       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7905/inline-volume-tester-5bgdf" objectUID=83f95a54-9eca-4469-b5da-85400d34d2b0 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:47.091224       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1" objectUID=d134a2f2-d24b-4caa-8648-928e55e641e5 kind="PersistentVolumeClaim" virtual=false
I0912 13:41:47.093514       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-7905/inline-volume-tester-5bgdf" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1"
I0912 13:41:47.093543       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1"
I0912 13:41:47.096064       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0" objectUID=5d619c9c-28fa-4025-a354-b7cda6811bcb kind="PersistentVolumeClaim" virtual=false
I0912 13:41:47.096106       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-7905/inline-volume-tester-5bgdf" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0"
I0912 13:41:47.096204       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0"
I0912 13:41:47.097020       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-6758/webserver-7fb4dff56c" need=3 deleting=1
I0912 13:41:47.097050       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-6758/webserver-7fb4dff56c" relatedReplicaSets=[webserver-7fb4dff56c webserver-6584b976d5 webserver-ddb74847c]
I0912 13:41:47.097180       1 controller_utils.go:592] "Deleting pod" controller="webserver-7fb4dff56c" pod="deployment-6758/webserver-7fb4dff56c-tgv5k"
I0912 13:41:47.099486       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf" objectUID=29147406-d8da-409a-b2c8-b5b9dc5ec2af kind="Pod" virtual=false
I0912 13:41:47.100148       1 event.go:294] "Event occurred" object="deployment-6758/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7fb4dff56c to 3"
I0912 13:41:47.101938       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf" objectUID=83f95a54-9eca-4469-b5da-85400d34d2b0 kind="CiliumEndpoint" virtual=false
I0912 13:41:47.105265       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf-my-volume-0, uid: 5d619c9c-28fa-4025-a354-b7cda6811bcb] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, uid: 29147406-d8da-409a-b2c8-b5b9dc5ec2af] is deletingDependents
I0912 13:41:47.105289       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf-my-volume-1, uid: d134a2f2-d24b-4caa-8648-928e55e641e5] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, uid: 29147406-d8da-409a-b2c8-b5b9dc5ec2af] is deletingDependents
I0912 13:41:47.105311       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0" objectUID=5d619c9c-28fa-4025-a354-b7cda6811bcb kind="PersistentVolumeClaim" virtual=false
I0912 13:41:47.110146       1 garbagecollector.go:471] "Processing object" object="deployment-6758/webserver-7fb4dff56c-tgv5k" objectUID=927352dc-fea0-4ec1-9d45-d648c273b587 kind="CiliumEndpoint" virtual=false
I0912 13:41:47.114019       1 garbagecollector.go:580] "Deleting object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1" objectUID=d134a2f2-d24b-4caa-8648-928e55e641e5 kind="PersistentVolumeClaim" propagationPolicy=Background
I0912 13:41:47.114721       1 event.go:294] "Event occurred" object="deployment-6758/webserver-7fb4dff56c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7fb4dff56c-tgv5k"
I0912 13:41:47.119383       1 garbagecollector.go:580] "Deleting object" object="deployment-6758/webserver-7fb4dff56c-tgv5k" objectUID=927352dc-fea0-4ec1-9d45-d648c273b587 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:47.120264       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-6758/webserver-ddb74847c" need=10 creating=1
I0912 13:41:47.122485       1 event.go:294] "Event occurred" object="deployment-6758/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set webserver-ddb74847c to 10"
I0912 13:41:47.133058       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1" objectUID=d134a2f2-d24b-4caa-8648-928e55e641e5 kind="PersistentVolumeClaim" virtual=false
I0912 13:41:47.133636       1 event.go:294] "Event occurred" object="deployment-6758/webserver-ddb74847c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: webserver-ddb74847c-z9q26"
I0912 13:41:47.140925       1 event.go:294] "Event occurred" object="volume-4030/awsbmpgl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0912 13:41:47.293990       1 event.go:294] "Event occurred" object="csi-mock-volumes-1707/pvc-9cpfm" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0912 13:41:47.296069       1 pvc_protection_controller.go:291] "PVC is unused" PVC="csi-mock-volumes-1707/pvc-9cpfm"
I0912 13:41:47.378712       1 event.go:294] "Event occurred" object="volume-4030/awsbmpgl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0912 13:41:47.448404       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "aws-j5cc2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b") from node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:41:47.448567       1 event.go:294] "Event occurred" object="volume-7955/aws-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"aws-j5cc2\" "
I0912 13:41:47.553769       1 namespace_controller.go:185] Namespace has been deleted replicaset-4389
I0912 13:41:47.769875       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-6758/webserver-7fb4dff56c" need=2 deleting=1
I0912 13:41:47.769914       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-6758/webserver-7fb4dff56c" relatedReplicaSets=[webserver-7fb4dff56c webserver-6584b976d5 webserver-ddb74847c]
I0912 13:41:47.770016       1 controller_utils.go:592] "Deleting pod" controller="webserver-7fb4dff56c" pod="deployment-6758/webserver-7fb4dff56c-875lx"
I0912 13:41:47.770646       1 event.go:294] "Event occurred" object="deployment-6758/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7fb4dff56c to 2"
I0912 13:41:47.793953       1 event.go:294] "Event occurred" object="deployment-6758/webserver-7fb4dff56c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7fb4dff56c-875lx"
I0912 13:41:47.794507       1 garbagecollector.go:471] "Processing object" object="deployment-6758/webserver-7fb4dff56c-875lx" objectUID=15e1b3a1-795b-449d-8b34-ac4d55940ad0 kind="CiliumEndpoint" virtual=false
I0912 13:41:47.804654       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-6758/webserver" err="Operation cannot be fulfilled on deployments.apps \"webserver\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:41:47.805000       1 garbagecollector.go:580] "Deleting object" object="deployment-6758/webserver-7fb4dff56c-875lx" objectUID=15e1b3a1-795b-449d-8b34-ac4d55940ad0 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:48.036934       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-5297/pvc-7nfkl"
I0912 13:41:48.055341       1 pv_controller.go:640] volume "local-br7sd" is released and reclaim policy "Retain" will be executed
I0912 13:41:48.063183       1 pv_controller.go:879] volume "local-br7sd" entered phase "Released"
I0912 13:41:48.141754       1 pv_controller_base.go:505] deletion of claim "provisioning-5297/pvc-7nfkl" was already processed
E0912 13:41:48.230450       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:41:48.268203       1 event.go:294] "Event occurred" object="deployment-6758/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7fb4dff56c to 1"
I0912 13:41:48.268622       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-6758/webserver-7fb4dff56c" need=1 deleting=1
I0912 13:41:48.268723       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-6758/webserver-7fb4dff56c" relatedReplicaSets=[webserver-7fb4dff56c webserver-6584b976d5 webserver-ddb74847c]
I0912 13:41:48.268892       1 controller_utils.go:592] "Deleting pod" controller="webserver-7fb4dff56c" pod="deployment-6758/webserver-7fb4dff56c-krw7l"
E0912 13:41:48.275235       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:41:48.277909       1 garbagecollector.go:471] "Processing object" object="deployment-6758/webserver-7fb4dff56c-krw7l" objectUID=0b9d9c7b-c2d0-4f65-9064-74ba7e7c76d4 kind="CiliumEndpoint" virtual=false
I0912 13:41:48.281391       1 event.go:294] "Event occurred" object="deployment-6758/webserver-7fb4dff56c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7fb4dff56c-krw7l"
I0912 13:41:48.286416       1 garbagecollector.go:580] "Deleting object" object="deployment-6758/webserver-7fb4dff56c-krw7l" objectUID=0b9d9c7b-c2d0-4f65-9064-74ba7e7c76d4 kind="CiliumEndpoint" propagationPolicy=Background
W0912 13:41:48.746322       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
I0912 13:41:48.854654       1 replica_set.go:563] "Too few replicas" replicaSet="services-440/service-headless" need=3 creating=3
W0912 13:41:48.863938       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
I0912 13:41:48.867014       1 event.go:294] "Event occurred" object="services-440/service-headless" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-headless-v8lrr"
I0912 13:41:48.875335       1 event.go:294] "Event occurred" object="services-440/service-headless" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-headless-drzt2"
I0912 13:41:48.875367       1 event.go:294] "Event occurred" object="services-440/service-headless" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-headless-qhrth"
W0912 13:41:48.876370       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
W0912 13:41:48.895832       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
I0912 13:41:48.972591       1 event.go:294] "Event occurred" object="deployment-6758/webserver" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set webserver-7fb4dff56c to 0"
I0912 13:41:48.972841       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-6758/webserver-7fb4dff56c" need=0 deleting=1
I0912 13:41:48.972964       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-6758/webserver-7fb4dff56c" relatedReplicaSets=[webserver-7fb4dff56c webserver-6584b976d5 webserver-ddb74847c]
I0912 13:41:48.973150       1 controller_utils.go:592] "Deleting pod" controller="webserver-7fb4dff56c" pod="deployment-6758/webserver-7fb4dff56c-nhfn7"
I0912 13:41:48.983683       1 event.go:294] "Event occurred" object="deployment-6758/webserver-7fb4dff56c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: webserver-7fb4dff56c-nhfn7"
I0912 13:41:48.983847       1 garbagecollector.go:471] "Processing object" object="deployment-6758/webserver-7fb4dff56c-nhfn7" objectUID=7e4caff2-95b9-4b6a-adcf-1ffca0200b64 kind="CiliumEndpoint" virtual=false
I0912 13:41:48.989123       1 garbagecollector.go:580] "Deleting object" object="deployment-6758/webserver-7fb4dff56c-nhfn7" objectUID=7e4caff2-95b9-4b6a-adcf-1ffca0200b64 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:49.736721       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0e136abf09eb16f0d") on node "ip-172-20-48-249.eu-central-1.compute.internal"
W0912 13:41:49.751053       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
W0912 13:41:50.089662       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
W0912 13:41:50.275616       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
I0912 13:41:50.736144       1 pv_controller.go:879] volume "pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd" entered phase "Bound"
I0912 13:41:50.736186       1 pv_controller.go:982] volume "pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd" bound to claim "volume-4030/awsbmpgl"
I0912 13:41:50.744207       1 pv_controller.go:823] claim "volume-4030/awsbmpgl" entered phase "Bound"
I0912 13:41:50.905794       1 namespace_controller.go:185] Namespace has been deleted nettest-9785
I0912 13:41:51.000182       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1985-4982/csi-mockplugin-596d4f5878" objectUID=8696fc96-949e-4b23-9dff-35d2077945ce kind="ControllerRevision" virtual=false
I0912 13:41:51.000623       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1985-4982/csi-mockplugin
I0912 13:41:51.000740       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1985-4982/csi-mockplugin-0" objectUID=44a7ab48-5226-44e9-a8d2-756d6ccbaa9f kind="Pod" virtual=false
I0912 13:41:51.003054       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1985-4982/csi-mockplugin-596d4f5878" objectUID=8696fc96-949e-4b23-9dff-35d2077945ce kind="ControllerRevision" propagationPolicy=Background
I0912 13:41:51.003526       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1985-4982/csi-mockplugin-0" objectUID=44a7ab48-5226-44e9-a8d2-756d6ccbaa9f kind="Pod" propagationPolicy=Background
E0912 13:41:51.062188       1 tokens_controller.go:262] error synchronizing serviceaccount prestop-6994/default: secrets "default-token-5x5cj" is forbidden: unable to create new content in namespace prestop-6994 because it is being terminated
E0912 13:41:51.068706       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-3995/pvc-p9k8f: storageclass.storage.k8s.io "provisioning-3995" not found
I0912 13:41:51.068991       1 event.go:294] "Event occurred" object="provisioning-3995/pvc-p9k8f" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-3995\" not found"
W0912 13:41:51.097844       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
W0912 13:41:51.104409       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless:
I0912 13:41:51.151754       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 1"
I0912 13:41:51.151938       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-5161/test-rolling-update-with-lb-5ff6986c95" need=1 creating=1
I0912 13:41:51.162120       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-69jp2"
I0912 13:41:51.164855       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-5161/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:41:51.191659       1 pv_controller.go:879] volume "local-d9q24" entered phase "Available"
I0912 13:41:51.233307       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1985-4982/csi-mockplugin-resizer-69df97c8d9" objectUID=dc701705-9211-4114-82a5-8fb84fa1ef26 kind="ControllerRevision" virtual=false
I0912 13:41:51.233734       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1985-4982/csi-mockplugin-resizer
I0912 13:41:51.234215       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1985-4982/csi-mockplugin-resizer-0" objectUID=dbe1a421-3995-4cdb-a962-dbfc9d022c81 kind="Pod" virtual=false
I0912 13:41:51.235664       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1985-4982/csi-mockplugin-resizer-69df97c8d9" objectUID=dc701705-9211-4114-82a5-8fb84fa1ef26 kind="ControllerRevision" propagationPolicy=Background
I0912 13:41:51.243488       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1985-4982/csi-mockplugin-resizer-0" objectUID=dbe1a421-3995-4cdb-a962-dbfc9d022c81 kind="Pod" propagationPolicy=Background
I0912 13:41:51.421766       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0afb3a58e59b0d5b3") from node "ip-172-20-48-249.eu-central-1.compute.internal"
E0912 13:41:51.497440       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-8372/default: secrets "default-token-8cd58" is forbidden: unable to create new content in namespace volumemode-8372 because it is being terminated
I0912 13:41:51.543430       1 namespace_controller.go:185] Namespace has been deleted pod-network-test-2755
W0912 13:41:51.826886       1 reconciler.go:335] Multi-Attach error for volume "pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96") from node "ip-172-20-48-249.eu-central-1.compute.internal" Volume is already exclusively attached to node ip-172-20-34-134.eu-central-1.compute.internal and can't be attached to another
I0912 13:41:51.827107       1 event.go:294] "Event occurred" object="provisioning-7262/pod-subpath-test-dynamicpv-t5m8" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\" Volume is already exclusively attached to one node and can't be attached to another"
I0912 13:41:51.878101       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1985
I0912 13:41:51.937428       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-560faf69-c019-4483-b626-794503c4bb94" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-07b2c1d13bbc0d08b") on node "ip-172-20-34-134.eu-central-1.compute.internal"
I0912 13:41:51.940864       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-560faf69-c019-4483-b626-794503c4bb94" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-07b2c1d13bbc0d08b") on node "ip-172-20-34-134.eu-central-1.compute.internal"
I0912 13:41:52.063600       1 stateful_set_control.go:521] StatefulSet statefulset-2162/ss terminating Pod ss-0 for scale down
I0912 13:41:52.071726       1 event.go:294] "Event occurred" object="statefulset-2162/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
I0912 13:41:52.336631       1 replica_set.go:563] "Too few replicas" replicaSet="services-440/service-headless-toggled" need=3 creating=3
I0912 13:41:52.340842       1 event.go:294] "Event occurred" object="services-440/service-headless-toggled" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-headless-toggled-64rpg"
I0912 13:41:52.347242       1 event.go:294] "Event occurred" object="services-440/service-headless-toggled" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-headless-toggled-dd85f"
I0912 13:41:52.354702       1 event.go:294] "Event occurred" object="services-440/service-headless-toggled" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: service-headless-toggled-cpdkg"
I0912 13:41:53.385195       1 garbagecollector.go:471] "Processing object" object="services-4185/verify-service-up-exec-pod-dgswr" objectUID=4f99acb8-2549-4e7e-9403-19dfdc33b2fc kind="CiliumEndpoint" virtual=false
I0912 13:41:53.392137       1 garbagecollector.go:580] "Deleting object" object="services-4185/verify-service-up-exec-pod-dgswr" objectUID=4f99acb8-2549-4e7e-9403-19dfdc33b2fc kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:53.429552       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-5161/test-rolling-update-with-lb-864fb64577" need=2 deleting=1
I0912 13:41:53.429805       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-5161/test-rolling-update-with-lb-864fb64577" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]
I0912 13:41:53.429981       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-864fb64577" pod="deployment-5161/test-rolling-update-with-lb-864fb64577-rfb84"
I0912 13:41:53.430289       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-864fb64577 to 2"
I0912 13:41:53.453667       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-864fb64577-rfb84" objectUID=fae1fddc-404e-4083-8cac-2398b2259349 kind="CiliumEndpoint" virtual=false
I0912 13:41:53.456883       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-864fb64577-rfb84"
I0912 13:41:53.456994       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-5161/test-rolling-update-with-lb-5ff6986c95" need=2 creating=1
I0912 13:41:53.459761       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 2"
I0912 13:41:53.464458       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-864fb64577-rfb84" objectUID=fae1fddc-404e-4083-8cac-2398b2259349 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:53.474918       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-pz8sr"
I0912 13:41:53.475298       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-5161/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:41:53.492700       1 garbagecollector.go:471] "Processing object" object="container-probe-8229/startup-d88266de-627c-4895-b3b0-d6e96175c68c" objectUID=99e4a762-d7fd-4db5-9e77-205c5183e5fe kind="CiliumEndpoint" virtual=false
I0912 13:41:53.510200       1 garbagecollector.go:580] "Deleting object" object="container-probe-8229/startup-d88266de-627c-4895-b3b0-d6e96175c68c" objectUID=99e4a762-d7fd-4db5-9e77-205c5183e5fe kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:53.664043       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0afb3a58e59b0d5b3") from node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:41:53.664271       1 event.go:294] "Event occurred" object="volume-4030/exec-volume-test-dynamicpv-zs92" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd\" "
W0912 13:41:54.428434       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "statefulset-2162/test", retrying. Error: EndpointSlice informer cache is out of date
I0912 13:41:54.956364       1 event.go:294] "Event occurred" object="volumelimits-1503-2763/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0912 13:41:55.818158       1 garbagecollector.go:471] "Processing object" object="kubectl-1653/httpd" objectUID=887d8aab-eb54-449d-bfc3-083be8862e46 kind="CiliumEndpoint" virtual=false
I0912 13:41:55.822696       1 garbagecollector.go:580] "Deleting object" object="kubectl-1653/httpd" objectUID=887d8aab-eb54-449d-bfc3-083be8862e46 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:56.540423       1 pv_controller.go:930] claim "provisioning-3995/pvc-p9k8f" bound to volume "local-d9q24"
I0912 13:41:56.549631       1 pv_controller.go:879] volume "local-d9q24" entered phase "Bound"
I0912 13:41:56.549820       1 pv_controller.go:982] volume "local-d9q24" bound to claim "provisioning-3995/pvc-p9k8f"
I0912 13:41:56.560240       1 pv_controller.go:823] claim "provisioning-3995/pvc-p9k8f" entered phase "Bound"
I0912 13:41:56.600639       1 namespace_controller.go:185] Namespace has been deleted volumemode-8372
I0912 13:41:56.798722       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-5161/test-rolling-update-with-lb-864fb64577" need=1 deleting=1
I0912 13:41:56.798758       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-5161/test-rolling-update-with-lb-864fb64577" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]
I0912 13:41:56.798830       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-864fb64577" pod="deployment-5161/test-rolling-update-with-lb-864fb64577-wbknt"
I0912 13:41:56.799455       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-864fb64577 to 1"
I0912 13:41:56.809215       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-864fb64577-wbknt" objectUID=b4da3841-56b8-457a-b3d5-2018e7649d5c kind="CiliumEndpoint" virtual=false
I0912 13:41:56.811553       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-864fb64577-wbknt"
I0912 13:41:56.815105       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-5161/test-rolling-update-with-lb-5ff6986c95" need=3 creating=1
I0912 13:41:56.833158       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 3"
I0912 13:41:56.833272       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-864fb64577-wbknt" objectUID=b4da3841-56b8-457a-b3d5-2018e7649d5c kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:41:56.865706       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-xv57z"
I0912 13:41:56.912868       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-5161/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:41:56.953367       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1707-4421/csi-mockplugin-74b9796bd" objectUID=d4ac4516-78dc-4b69-8b4b-fcbdb3d8494e kind="ControllerRevision" virtual=false
I0912 13:41:56.953605       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1707-4421/csi-mockplugin
I0912 13:41:56.953636       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1707-4421/csi-mockplugin-0" objectUID=167468fb-055f-4aee-b995-5cd6cbad2fa6 kind="Pod" virtual=false
I0912 13:41:56.966019       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1707-4421/csi-mockplugin-0" objectUID=167468fb-055f-4aee-b995-5cd6cbad2fa6 kind="Pod" propagationPolicy=Background
I0912 13:41:56.966080       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1707-4421/csi-mockplugin-74b9796bd" objectUID=d4ac4516-78dc-4b69-8b4b-fcbdb3d8494e kind="ControllerRevision" propagationPolicy=Background
I0912 13:41:57.201260       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1707-4421/csi-mockplugin-attacher-6f9dc65994" objectUID=2fe8307e-1b12-4a9b-a766-f152c0e7e92f kind="ControllerRevision" virtual=false
I0912 13:41:57.201604       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-1707-4421/csi-mockplugin-attacher
I0912 13:41:57.201650       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-1707-4421/csi-mockplugin-attacher-0" objectUID=7bc81707-3f80-4290-8421-652eef4ab0ab kind="Pod" virtual=false
I0912 13:41:57.204333       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1707-4421/csi-mockplugin-attacher-6f9dc65994" objectUID=2fe8307e-1b12-4a9b-a766-f152c0e7e92f kind="ControllerRevision" propagationPolicy=Background
I0912 13:41:57.204504       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-1707-4421/csi-mockplugin-attacher-0" objectUID=7bc81707-3f80-4290-8421-652eef4ab0ab kind="Pod" propagationPolicy=Background
I0912 13:41:57.608797       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [resourcequota.example.com/v1, Resource=e2e-test-resourcequota-7437-crds], removed: []
I0912 13:41:57.609095       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for e2e-test-resourcequota-7437-crds.resourcequota.example.com
I0912 13:41:57.609172       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0912 13:41:57.624329       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-4577/aws22m9h"
I0912 13:41:57.632949       1 pv_controller.go:640] volume "pvc-2572b641-3900-4dd9-9cec-1b21c0d170f9" is released and reclaim policy "Delete" will be executed
I0912 13:41:57.635981       1 pv_controller.go:879] volume "pvc-2572b641-3900-4dd9-9cec-1b21c0d170f9" entered phase "Released"
I0912 13:41:57.639274       1 pv_controller.go:1340] isVolumeReleased[pvc-2572b641-3900-4dd9-9cec-1b21c0d170f9]: volume is released
I0912 13:41:57.688902       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1707
I0912 13:41:57.709378       1 shared_informer.go:247] Caches are synced for resource quota
I0912 13:41:57.709405       1 resource_quota_controller.go:454] synced quota controller
I0912 13:41:57.852272       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-552/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0912 13:41:57.853091       1 event.go:294] "Event occurred" object="webhook-552/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0912 13:41:57.862630       1 event.go:294] "Event occurred" object="webhook-552/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-2smzw"
I0912 13:41:57.867249       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-552/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:41:57.883698       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-552/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:41:58.118264       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [resourcequota.example.com/v1, Resource=e2e-test-resourcequota-7437-crds], removed: []
I0912 13:41:58.145862       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0912 13:41:58.145918       1 shared_informer.go:247] Caches are synced for garbage collector
I0912 13:41:58.145926       1 garbagecollector.go:254] synced garbage collector
I0912 13:41:58.221507       1 namespace_controller.go:185] Namespace has been deleted downward-api-3077
I0912 13:41:58.675146       1 event.go:294] 
\"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-864fb64577 to 0\"\nI0912 13:41:58.675479       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-864fb64577\" need=0 deleting=1\nI0912 13:41:58.675508       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-864fb64577\" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]\nI0912 13:41:58.675689       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-864fb64577\" pod=\"deployment-5161/test-rolling-update-with-lb-864fb64577-scjmt\"\nI0912 13:41:58.688619       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-864fb64577-scjmt\"\nI0912 13:41:58.690680       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5161/test-rolling-update-with-lb-864fb64577-scjmt\" objectUID=357ac9f6-7503-4fe3-b835-b4c84a502eb2 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:58.699328       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5161/test-rolling-update-with-lb-864fb64577-scjmt\" objectUID=357ac9f6-7503-4fe3-b835-b4c84a502eb2 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:41:58.719432       1 garbagecollector.go:471] \"Processing object\" object=\"services-4185/up-down-2-tg7xv\" objectUID=43cca24d-df90-4a6c-a1c0-2b63ccd50216 kind=\"EndpointSlice\" virtual=false\nI0912 13:41:58.725785       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4185/up-down-2-tg7xv\" objectUID=43cca24d-df90-4a6c-a1c0-2b63ccd50216 
kind=\"EndpointSlice\" propagationPolicy=Background\nI0912 13:41:58.741086       1 garbagecollector.go:471] \"Processing object\" object=\"services-4185/up-down-3-xmb7w\" objectUID=54064261-fe3d-4eb9-9f76-92957c5fb69f kind=\"EndpointSlice\" virtual=false\nI0912 13:41:58.745550       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4185/up-down-3-xmb7w\" objectUID=54064261-fe3d-4eb9-9f76-92957c5fb69f kind=\"EndpointSlice\" propagationPolicy=Background\nI0912 13:41:58.749161       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment-697cdbd8f4\" need=1 creating=1\nI0912 13:41:58.750541       1 event.go:294] \"Event occurred\" object=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-crd-conversion-webhook-deployment-697cdbd8f4 to 1\"\nI0912 13:41:58.765198       1 event.go:294] \"Event occurred\" object=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment-697cdbd8f4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-crd-conversion-webhook-deployment-697cdbd8f4-mw8lk\"\nI0912 13:41:58.770034       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-crd-conversion-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:41:59.233641       1 garbagecollector.go:471] \"Processing object\" object=\"services-4185/up-down-2-hjwgl\" objectUID=1c122813-f34d-48af-b570-a571a1336267 kind=\"Pod\" virtual=false\nI0912 13:41:59.233840       1 garbagecollector.go:471] \"Processing object\" object=\"services-4185/up-down-2-pc5km\" objectUID=44f1131e-144f-4c11-ad7e-83bd0f9cd33e 
kind=\"Pod\" virtual=false\nI0912 13:41:59.233855       1 garbagecollector.go:471] \"Processing object\" object=\"services-4185/up-down-2-dsfmm\" objectUID=16cb483b-0cda-46b5-8a9c-e0b4a6c72d9e kind=\"Pod\" virtual=false\nI0912 13:41:59.237775       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4185/up-down-2-hjwgl\" objectUID=1c122813-f34d-48af-b570-a571a1336267 kind=\"Pod\" propagationPolicy=Background\nI0912 13:41:59.237939       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4185/up-down-2-pc5km\" objectUID=44f1131e-144f-4c11-ad7e-83bd0f9cd33e kind=\"Pod\" propagationPolicy=Background\nI0912 13:41:59.238444       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4185/up-down-2-dsfmm\" objectUID=16cb483b-0cda-46b5-8a9c-e0b4a6c72d9e kind=\"Pod\" propagationPolicy=Background\nI0912 13:41:59.240906       1 garbagecollector.go:471] \"Processing object\" object=\"services-4185/up-down-3-47qqf\" objectUID=1546beec-807e-4f9a-a8c5-7d4aac6133cf kind=\"Pod\" virtual=false\nI0912 13:41:59.241208       1 garbagecollector.go:471] \"Processing object\" object=\"services-4185/up-down-3-rl69z\" objectUID=ffc65038-4687-4d32-bdf5-111ffff5bdfa kind=\"Pod\" virtual=false\nI0912 13:41:59.241584       1 garbagecollector.go:471] \"Processing object\" object=\"services-4185/up-down-3-tch48\" objectUID=599090ff-5601-49c7-af94-c2816ba321c7 kind=\"Pod\" virtual=false\nI0912 13:41:59.253113       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4185/up-down-3-47qqf\" objectUID=1546beec-807e-4f9a-a8c5-7d4aac6133cf kind=\"Pod\" propagationPolicy=Background\nI0912 13:41:59.256538       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4185/up-down-3-rl69z\" objectUID=ffc65038-4687-4d32-bdf5-111ffff5bdfa kind=\"Pod\" propagationPolicy=Background\nI0912 13:41:59.256879       1 garbagecollector.go:580] \"Deleting object\" object=\"services-4185/up-down-3-tch48\" objectUID=599090ff-5601-49c7-af94-c2816ba321c7 
kind=\"Pod\" propagationPolicy=Background\nE0912 13:41:59.376571       1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nI0912 13:41:59.440698       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-2572b641-3900-4dd9-9cec-1b21c0d170f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05ba31300f1075e59\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:41:59.452689       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-6cbea32f-4f7c-47a1-a966-dde7da716596\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00231b8d3aeb0c4eb\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:41:59.457752       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-2572b641-3900-4dd9-9cec-1b21c0d170f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05ba31300f1075e59\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:41:59.472386       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-6cbea32f-4f7c-47a1-a966-dde7da716596\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00231b8d3aeb0c4eb\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:41:59.472835       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:41:59.484311       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:41:59.485128       1 event.go:294] \"Event occurred\" 
object=\"provisioning-231-2257/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0912 13:41:59.487700       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0912 13:41:59.526957       1 namespace_controller.go:185] Namespace has been deleted provisioning-5297\nI0912 13:41:59.608387       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-5885/my-hostname-basic-7a1c2f32-83c2-44ec-be86-4df03de222a0-6v8zt\" objectUID=712387ab-bfdc-47ab-baeb-c64f24aacfc3 kind=\"Pod\" virtual=false\nI0912 13:41:59.611232       1 garbagecollector.go:580] \"Deleting object\" object=\"replication-controller-5885/my-hostname-basic-7a1c2f32-83c2-44ec-be86-4df03de222a0-6v8zt\" objectUID=712387ab-bfdc-47ab-baeb-c64f24aacfc3 kind=\"Pod\" propagationPolicy=Background\nI0912 13:41:59.619226       1 garbagecollector.go:471] \"Processing object\" object=\"replication-controller-5885/my-hostname-basic-7a1c2f32-83c2-44ec-be86-4df03de222a0-6v8zt\" objectUID=488e493a-ea2b-433c-be63-5fe5f6ad98e2 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:41:59.623822       1 garbagecollector.go:580] \"Deleting object\" object=\"replication-controller-5885/my-hostname-basic-7a1c2f32-83c2-44ec-be86-4df03de222a0-6v8zt\" objectUID=488e493a-ea2b-433c-be63-5fe5f6ad98e2 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0912 13:41:59.697146       1 tokens_controller.go:262] error synchronizing serviceaccount replication-controller-5885/default: secrets \"default-token-zvwhr\" is forbidden: unable to create new content in namespace replication-controller-5885 because it is being terminated\nE0912 13:41:59.711755       1 namespace_controller.go:162] deletion of namespace 
services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nI0912 13:41:59.785717       1 event.go:294] \"Event occurred\" object=\"provisioning-231/csi-hostpathln78g\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-231\\\" or manually created by system administrator\"\nI0912 13:41:59.931030       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" need=1 creating=1\nI0912 13:41:59.935174       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 1\"\nI0912 13:41:59.939557       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-5jkfg\"\nI0912 13:41:59.948908       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5161/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0912 13:41:59.979340       1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nI0912 13:42:00.121441       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-193/replace-27190901--1-ljl5p\" objectUID=0f83f62e-ef48-4cb8-8cee-83803a4f5712 kind=\"Pod\" virtual=false\nI0912 13:42:00.121906       1 job_controller.go:406] enqueueing 
job cronjob-193/replace-27190901\nI0912 13:42:00.122670       1 event.go:294] \"Event occurred\" object=\"cronjob-193/replace\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted job replace-27190901\"\nI0912 13:42:00.126399       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-193/replace-27190901--1-ljl5p\" objectUID=0f83f62e-ef48-4cb8-8cee-83803a4f5712 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.130763       1 event.go:294] \"Event occurred\" object=\"cronjob-193/replace\" kind=\"CronJob\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job replace-27190902\"\nI0912 13:42:00.131382       1 job_controller.go:406] enqueueing job cronjob-193/replace-27190902\nI0912 13:42:00.149296       1 cronjob_controllerv2.go:193] \"Error cleaning up jobs\" cronjob=\"cronjob-193/replace\" resourceVersion=\"20748\" err=\"Operation cannot be fulfilled on cronjobs.batch \\\"replace\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0912 13:42:00.149326       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-193/replace, requeuing: Operation cannot be fulfilled on cronjobs.batch \"replace\": the object has been modified; please apply your changes to the latest version and try again\nI0912 13:42:00.152920       1 event.go:294] \"Event occurred\" object=\"cronjob-193/replace-27190902\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: replace-27190902--1-lczpp\"\nI0912 13:42:00.153668       1 job_controller.go:406] enqueueing job cronjob-193/replace-27190902\nI0912 13:42:00.165758       1 job_controller.go:406] enqueueing job cronjob-193/replace-27190902\nI0912 13:42:00.166225       1 job_controller.go:406] enqueueing job cronjob-193/replace-27190902\nE0912 13:42:00.248861       1 namespace_controller.go:162] deletion of namespace services-4185 
failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nI0912 13:42:00.304773       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-1195/quota-for-e2e-test-resourcequota-7437-crds\nE0912 13:42:00.426642       1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nE0912 13:42:00.644906       1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nE0912 13:42:00.689355       1 pv_controller.go:1451] error finding provisioning plugin for claim volumemode-9342/pvc-6cxjz: storageclass.storage.k8s.io \"volumemode-9342\" not found\nI0912 13:42:00.689730       1 event.go:294] \"Event occurred\" object=\"volumemode-9342/pvc-6cxjz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-9342\\\" not found\"\nI0912 13:42:00.802605       1 pv_controller.go:879] volume \"local-xf52x\" entered phase \"Available\"\nI0912 13:42:00.922729       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-6584b976d5\" objectUID=b3841808-3f1d-4523-8799-647356d8ea8b kind=\"ReplicaSet\" virtual=false\nI0912 13:42:00.923768       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-6758/webserver\"\nI0912 13:42:00.925172       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-7fb4dff56c\" objectUID=f2955c88-ff37-4164-9302-000010dd83ba kind=\"ReplicaSet\" virtual=false\nI0912 13:42:00.925414       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c\" objectUID=d62675de-0121-4675-bf55-79a65e845072 kind=\"ReplicaSet\" virtual=false\nI0912 13:42:00.928491       1 
garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-7fb4dff56c\" objectUID=f2955c88-ff37-4164-9302-000010dd83ba kind=\"ReplicaSet\" propagationPolicy=Background\nI0912 13:42:00.928763       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c\" objectUID=d62675de-0121-4675-bf55-79a65e845072 kind=\"ReplicaSet\" propagationPolicy=Background\nI0912 13:42:00.928843       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-6584b976d5\" objectUID=b3841808-3f1d-4523-8799-647356d8ea8b kind=\"ReplicaSet\" propagationPolicy=Background\nI0912 13:42:00.934705       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-x45zr\" objectUID=8f321999-f15e-4b9a-b975-90fb81a612e9 kind=\"Pod\" virtual=false\nI0912 13:42:00.935040       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-tv98p\" objectUID=2aa78b36-793a-4466-81fd-f0e7af7c19b0 kind=\"Pod\" virtual=false\nI0912 13:42:00.935224       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-frbvl\" objectUID=d623d8b9-6766-4ff2-9be4-d2e8e0b07057 kind=\"Pod\" virtual=false\nI0912 13:42:00.935349       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-z9q26\" objectUID=bfd7497a-2f11-4fb2-9faf-9dd02f348d38 kind=\"Pod\" virtual=false\nI0912 13:42:00.935455       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-k64zd\" objectUID=1364ca3c-e4c7-4f13-976a-08b696ac0268 kind=\"Pod\" virtual=false\nI0912 13:42:00.935561       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-czjlt\" objectUID=4cde7a57-36cb-4f2d-bb60-5b98124af9c8 kind=\"Pod\" virtual=false\nI0912 13:42:00.935672       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-dvz4p\" 
objectUID=2cff28d1-8ec4-4887-bdb9-f92856ae77a9 kind=\"Pod\" virtual=false\nI0912 13:42:00.935767       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-m4qpt\" objectUID=6f2a36b5-b361-4511-b038-50a1ceb56d50 kind=\"Pod\" virtual=false\nI0912 13:42:00.935851       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-tnpw8\" objectUID=7a04dac9-a75d-4b6f-9494-62d341ac9fda kind=\"Pod\" virtual=false\nI0912 13:42:00.935935       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-grfpz\" objectUID=6b536cd2-6646-4542-87b9-e64ca3b7b7f4 kind=\"Pod\" virtual=false\nI0912 13:42:00.944380       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-tv98p\" objectUID=2aa78b36-793a-4466-81fd-f0e7af7c19b0 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.944390       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-k64zd\" objectUID=1364ca3c-e4c7-4f13-976a-08b696ac0268 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.944434       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-m4qpt\" objectUID=6f2a36b5-b361-4511-b038-50a1ceb56d50 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.944504       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-grfpz\" objectUID=6b536cd2-6646-4542-87b9-e64ca3b7b7f4 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.945035       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-czjlt\" objectUID=4cde7a57-36cb-4f2d-bb60-5b98124af9c8 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.945314       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-dvz4p\" objectUID=2cff28d1-8ec4-4887-bdb9-f92856ae77a9 kind=\"Pod\" 
propagationPolicy=Background\nI0912 13:42:00.945546       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-tnpw8\" objectUID=7a04dac9-a75d-4b6f-9494-62d341ac9fda kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.946062       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-frbvl\" objectUID=d623d8b9-6766-4ff2-9be4-d2e8e0b07057 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.947297       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-x45zr\" objectUID=8f321999-f15e-4b9a-b975-90fb81a612e9 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.951179       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-z9q26\" objectUID=bfd7497a-2f11-4fb2-9faf-9dd02f348d38 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:00.959844       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-tv98p\" objectUID=b0c4e027-1d32-4cab-b9f3-7b4fd4bac47f kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:00.966890       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-tv98p\" objectUID=b0c4e027-1d32-4cab-b9f3-7b4fd4bac47f kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:00.996992       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-k64zd\" objectUID=ff32da12-7e20-4f82-9da0-396ac680cab9 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.002024       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-dvz4p\" objectUID=8ff58ffa-b4f0-45c5-8234-ce7dcc6f3682 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.003299       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-k64zd\" objectUID=ff32da12-7e20-4f82-9da0-396ac680cab9 kind=\"CiliumEndpoint\" 
propagationPolicy=Background\nI0912 13:42:01.007322       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-x45zr\" objectUID=15cff5da-d330-4c32-9c10-3228341439e0 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.007657       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-grfpz\" objectUID=1fee4174-b295-4fb3-add7-277506dacb47 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.007723       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-frbvl\" objectUID=099f76e7-0ef7-46e8-8a46-a82521e94d20 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.018438       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-dvz4p\" objectUID=8ff58ffa-b4f0-45c5-8234-ce7dcc6f3682 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:01.018555       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-m4qpt\" objectUID=c32dd26c-476b-4fdb-b2ab-adb462ba7a4e kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.018861       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-czjlt\" objectUID=fcee6e9f-cd1c-4d64-b68d-704a4993c93c kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.019039       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-z9q26\" objectUID=e71b24f4-2b77-4b39-9d6b-1e925a3bbe6f kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.021571       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-tnpw8\" objectUID=6ad05555-a086-484e-8863-ab2f7cfa11de kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:01.030731       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-x45zr\" objectUID=15cff5da-d330-4c32-9c10-3228341439e0 kind=\"CiliumEndpoint\" 
propagationPolicy=Background\nE0912 13:42:01.064455       1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nI0912 13:42:01.081696       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-grfpz\" objectUID=1fee4174-b295-4fb3-add7-277506dacb47 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:01.128090       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-frbvl\" objectUID=099f76e7-0ef7-46e8-8a46-a82521e94d20 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:01.279606       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-czjlt\" objectUID=fcee6e9f-cd1c-4d64-b68d-704a4993c93c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:01.329230       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-6758/webserver-ddb74847c-z9q26\" objectUID=e71b24f4-2b77-4b39-9d6b-1e925a3bbe6f kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0912 13:42:01.478995       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"webserver-ddb74847c-grfpz\", UID:\"1fee4174-b295-4fb3-add7-277506dacb47\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-6758\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, 
readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-ddb74847c-grfpz\", UID:\"6b536cd2-6646-4542-87b9-e64ca3b7b7f4\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0x4003504b96)}}}: ciliumendpoints.cilium.io \"webserver-ddb74847c-grfpz\" not found\nI0912 13:42:01.484252       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-grfpz\" objectUID=1fee4174-b295-4fb3-add7-277506dacb47 kind=\"CiliumEndpoint\" virtual=false\nE0912 13:42:01.577748       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"cilium.io/v2\", Kind:\"CiliumEndpoint\", Name:\"webserver-ddb74847c-czjlt\", UID:\"fcee6e9f-cd1c-4d64-b68d-704a4993c93c\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"deployment-6758\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"webserver-ddb74847c-czjlt\", UID:\"4cde7a57-36cb-4f2d-bb60-5b98124af9c8\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(0x40026eeca6)}}}: ciliumendpoints.cilium.io \"webserver-ddb74847c-czjlt\" not found\nI0912 13:42:01.582991       1 
garbagecollector.go:471] \"Processing object\" object=\"deployment-6758/webserver-ddb74847c-czjlt\" objectUID=fcee6e9f-cd1c-4d64-b68d-704a4993c93c kind=\"CiliumEndpoint\" virtual=false\nE0912 13:42:01.601559       1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nE0912 13:42:01.672617       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-1262/pvc-sgfrd: storageclass.storage.k8s.io \"provisioning-1262\" not found\nI0912 13:42:01.672927       1 event.go:294] \"Event occurred\" object=\"provisioning-1262/pvc-sgfrd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-1262\\\" not found\"\nI0912 13:42:01.673174       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-1985-4982\nI0912 13:42:01.717410       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-193/replace-27190902\" objectUID=df09afda-392c-412f-8fcd-8cfed830ab64 kind=\"Job\" virtual=false\nI0912 13:42:01.778664       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-193/replace-27190902\" objectUID=df09afda-392c-412f-8fcd-8cfed830ab64 kind=\"Job\" propagationPolicy=Background\nI0912 13:42:01.801175       1 pv_controller.go:879] volume \"local-75pt2\" entered phase \"Available\"\nI0912 13:42:01.829984       1 garbagecollector.go:471] \"Processing object\" object=\"cronjob-193/replace-27190902--1-lczpp\" objectUID=d843e7f1-406d-4f50-aec5-3a8300e07b22 kind=\"Pod\" virtual=false\nI0912 13:42:01.830181       1 job_controller.go:406] enqueueing job cronjob-193/replace-27190902\nI0912 13:42:01.878752       1 garbagecollector.go:580] \"Deleting object\" object=\"cronjob-193/replace-27190902--1-lczpp\" objectUID=d843e7f1-406d-4f50-aec5-3a8300e07b22 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:02.389142       1 
event.go:294] \"Event occurred\" object=\"statefulset-8989/test-ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod test-ss-0 in StatefulSet test-ss successful\"\nE0912 13:42:02.519685       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-1707-4421/default: secrets \"default-token-kzkqw\" is forbidden: unable to create new content in namespace csi-mock-volumes-1707-4421 because it is being terminated\nE0912 13:42:02.823723       1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nI0912 13:42:03.199421       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-4030/awsbmpgl\"\nI0912 13:42:03.211020       1 pv_controller.go:640] volume \"pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd\" is released and reclaim policy \"Delete\" will be executed\nI0912 13:42:03.215007       1 pv_controller.go:879] volume \"pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd\" entered phase \"Released\"\nI0912 13:42:03.218298       1 pv_controller.go:1340] isVolumeReleased[pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd]: volume is released\nE0912 13:42:03.616789       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:42:03.957931       1 namespace_controller.go:185] Namespace has been deleted container-probe-8229\nI0912 13:42:04.202476       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"ephemeral-2569/inline-volume-tester-r7xj2\" PVC=\"ephemeral-2569/inline-volume-tester-r7xj2-my-volume-0\"\nI0912 13:42:04.202512       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"ephemeral-2569/inline-volume-tester-r7xj2-my-volume-0\"\nI0912 13:42:04.213789       1 
pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-2569/inline-volume-tester-r7xj2-my-volume-0\"\nI0912 13:42:04.222232       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-2569/inline-volume-tester-r7xj2\" objectUID=48415683-4f0f-4145-bdff-415c3ea3e3ce kind=\"Pod\" virtual=false\nI0912 13:42:04.225536       1 pv_controller.go:640] volume \"pvc-0287415b-22a3-4b71-8e4a-8d1e2ba0d833\" is released and reclaim policy \"Delete\" will be executed\nE0912 13:42:04.227901       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:42:04.228094       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-2569, name: inline-volume-tester-r7xj2, uid: 48415683-4f0f-4145-bdff-415c3ea3e3ce]\nI0912 13:42:04.236463       1 pv_controller.go:879] volume \"pvc-0287415b-22a3-4b71-8e4a-8d1e2ba0d833\" entered phase \"Released\"\nI0912 13:42:04.240534       1 pv_controller.go:1340] isVolumeReleased[pvc-0287415b-22a3-4b71-8e4a-8d1e2ba0d833]: volume is released\nI0912 13:42:04.260423       1 pv_controller_base.go:505] deletion of claim \"ephemeral-2569/inline-volume-tester-r7xj2-my-volume-0\" was already processed\nE0912 13:42:04.268575       1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nI0912 13:42:04.860301       1 namespace_controller.go:185] Namespace has been deleted replication-controller-5885\nI0912 13:42:05.065134       1 namespace_controller.go:185] Namespace has been deleted pods-4549\nI0912 13:42:05.136485       1 pv_controller.go:879] volume \"pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99\" entered phase \"Bound\"\nI0912 13:42:05.136526       1 pv_controller.go:982] volume \"pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99\" bound 
to claim \"provisioning-231/csi-hostpathln78g\"\nI0912 13:42:05.196380       1 pv_controller.go:823] claim \"provisioning-231/csi-hostpathln78g\" entered phase \"Bound\"\nI0912 13:42:05.444485       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-560faf69-c019-4483-b626-794503c4bb94\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07b2c1d13bbc0d08b\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:42:06.239421       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 2\"\nI0912 13:42:06.239679       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" need=2 deleting=1\nI0912 13:42:06.239708       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]\nI0912 13:42:06.239808       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-5ff6986c95\" pod=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-69jp2\"\nI0912 13:42:06.260089       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-5ff6986c95-69jp2\"\nI0912 13:42:06.262217       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-69jp2\" objectUID=987c4481-e404-47a2-a858-3c5627c569ad kind=\"CiliumEndpoint\" virtual=false\nW0912 13:42:06.272795       1 endpointslice_controller.go:306] Error syncing endpoint slices 
for service \"deployment-5161/test-rolling-update-with-lb\", retrying. Error: EndpointSlice informer cache is out of date\nI0912 13:42:06.273199       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 2\"\nI0912 13:42:06.273632       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" need=2 creating=1\nI0912 13:42:06.275551       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-69jp2\" objectUID=987c4481-e404-47a2-a858-3c5627c569ad kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:06.284442       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-6v4xd\"\nI0912 13:42:06.295870       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5161/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:42:06.507379       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-552/e2e-test-webhook-kp6zt\" objectUID=d42274bc-0476-4494-a7d1-684726a48e5c kind=\"EndpointSlice\" virtual=false\nI0912 13:42:06.510894       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-552/e2e-test-webhook-kp6zt\" objectUID=d42274bc-0476-4494-a7d1-684726a48e5c kind=\"EndpointSlice\" propagationPolicy=Background\nI0912 13:42:06.568360       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume 
\"pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-231^3770b608-13cf-11ec-b2b4-8286f03a97cf\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:06.624768       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-552/sample-webhook-deployment-78988fc6cd\" objectUID=472b0ae0-6360-49fe-b3bf-6c4325a582d4 kind=\"ReplicaSet\" virtual=false\nI0912 13:42:06.624816       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-552/sample-webhook-deployment\"\nI0912 13:42:06.629231       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-552/sample-webhook-deployment-78988fc6cd\" objectUID=472b0ae0-6360-49fe-b3bf-6c4325a582d4 kind=\"ReplicaSet\" propagationPolicy=Background\nI0912 13:42:06.634229       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-552/sample-webhook-deployment-78988fc6cd-2smzw\" objectUID=8c003076-d9fb-40ef-a9d7-5b51aa5258c3 kind=\"Pod\" virtual=false\nI0912 13:42:06.636080       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-552/sample-webhook-deployment-78988fc6cd-2smzw\" objectUID=8c003076-d9fb-40ef-a9d7-5b51aa5258c3 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:06.646035       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-552/sample-webhook-deployment-78988fc6cd-2smzw\" objectUID=c8f3c06c-edd6-4a52-934e-b7acf63873c2 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:06.650872       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-552/sample-webhook-deployment-78988fc6cd-2smzw\" objectUID=c8f3c06c-edd6-4a52-934e-b7acf63873c2 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0912 13:42:06.706302       1 tokens_controller.go:262] error synchronizing serviceaccount container-runtime-3957/default: secrets \"default-token-sp5lc\" is forbidden: unable to create new content in namespace container-runtime-3957 because it is being terminated\nE0912 
13:42:07.052382       1 tokens_controller.go:262] error synchronizing serviceaccount cronjob-193/default: secrets \"default-token-p2ldd\" is forbidden: unable to create new content in namespace cronjob-193 because it is being terminated\nI0912 13:42:07.073509       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-6cbea32f-4f7c-47a1-a966-dde7da716596\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00231b8d3aeb0c4eb\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:42:07.074958       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6cbea32f-4f7c-47a1-a966-dde7da716596\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00231b8d3aeb0c4eb\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:42:07.098755       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:42:07.101504       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-231^3770b608-13cf-11ec-b2b4-8286f03a97cf\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:07.102093       1 event.go:294] \"Event occurred\" object=\"provisioning-231/pod-subpath-test-dynamicpv-744n\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99\\\" \"\nI0912 13:42:07.175983       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nE0912 13:42:07.212687     
  1 namespace_controller.go:162] deletion of namespace services-4185 failed: unexpected items still remain in namespace: services-4185 for gvr: /v1, Resource=pods\nI0912 13:42:07.491656       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-3995/pvc-p9k8f\"\nI0912 13:42:07.508287       1 pv_controller.go:640] volume \"local-d9q24\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:42:07.512252       1 pv_controller.go:879] volume \"local-d9q24\" entered phase \"Released\"\nI0912 13:42:07.604692       1 pv_controller_base.go:505] deletion of claim \"provisioning-3995/pvc-p9k8f\" was already processed\nI0912 13:42:07.801951       1 namespace_controller.go:185] Namespace has been deleted kubectl-1653\nI0912 13:42:07.823307       1 garbagecollector.go:471] \"Processing object\" object=\"volumelimits-1503-2763/csi-hostpathplugin-66bd7748fc\" objectUID=1fcedb4b-712a-4afd-85b6-5c6eb7f73bf3 kind=\"ControllerRevision\" virtual=false\nI0912 13:42:07.823744       1 stateful_set.go:440] StatefulSet has been deleted volumelimits-1503-2763/csi-hostpathplugin\nI0912 13:42:07.823832       1 garbagecollector.go:471] \"Processing object\" object=\"volumelimits-1503-2763/csi-hostpathplugin-0\" objectUID=afdb0728-ab57-4cf0-8173-dab813a77075 kind=\"Pod\" virtual=false\nI0912 13:42:07.826780       1 garbagecollector.go:580] \"Deleting object\" object=\"volumelimits-1503-2763/csi-hostpathplugin-66bd7748fc\" objectUID=1fcedb4b-712a-4afd-85b6-5c6eb7f73bf3 kind=\"ControllerRevision\" propagationPolicy=Background\nI0912 13:42:07.827117       1 garbagecollector.go:580] \"Deleting object\" object=\"volumelimits-1503-2763/csi-hostpathplugin-0\" objectUID=afdb0728-ab57-4cf0-8173-dab813a77075 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:07.838561       1 namespace_controller.go:185] Namespace has been deleted volumelimits-1503\nI0912 13:42:07.911096       1 namespace_controller.go:185] Namespace has been deleted 
csi-mock-volumes-1707-4421\nE0912 13:42:07.938713       1 tokens_controller.go:262] error synchronizing serviceaccount server-version-6937/default: secrets \"default-token-lhz7t\" is forbidden: unable to create new content in namespace server-version-6937 because it is being terminated\nI0912 13:42:07.963287       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-4419/e2e-test-crd-conversion-webhook-d5vsz\" objectUID=ad8d5f5e-d044-432b-a15a-98007f92774f kind=\"EndpointSlice\" virtual=false\nI0912 13:42:07.974320       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-4419/e2e-test-crd-conversion-webhook-d5vsz\" objectUID=ad8d5f5e-d044-432b-a15a-98007f92774f kind=\"EndpointSlice\" propagationPolicy=Background\nI0912 13:42:08.097674       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment-697cdbd8f4\" objectUID=bea446b4-c67e-407d-abeb-ab39eeea3c50 kind=\"ReplicaSet\" virtual=false\nI0912 13:42:08.098091       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment\"\nI0912 13:42:08.105250       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment-697cdbd8f4\" objectUID=bea446b4-c67e-407d-abeb-ab39eeea3c50 kind=\"ReplicaSet\" propagationPolicy=Background\nI0912 13:42:08.131716       1 garbagecollector.go:471] \"Processing object\" object=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment-697cdbd8f4-mw8lk\" objectUID=8ea0f0be-22ca-4828-8295-8ed54c1ca60f kind=\"Pod\" virtual=false\nI0912 13:42:08.136755       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment-697cdbd8f4-mw8lk\" objectUID=8ea0f0be-22ca-4828-8295-8ed54c1ca60f kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:08.168971       1 garbagecollector.go:471] \"Processing object\" 
object=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment-697cdbd8f4-mw8lk\" objectUID=dc439a9c-d012-4b6e-96a2-05bb41183f7d kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:08.175172       1 garbagecollector.go:580] \"Deleting object\" object=\"crd-webhook-4419/sample-crd-conversion-webhook-deployment-697cdbd8f4-mw8lk\" objectUID=dc439a9c-d012-4b6e-96a2-05bb41183f7d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:08.241251       1 namespace_controller.go:185] Namespace has been deleted deployment-6758\nI0912 13:42:08.752444       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" need=1 deleting=1\nI0912 13:42:08.752643       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]\nI0912 13:42:08.752878       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-5ff6986c95\" pod=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-xv57z\"\nI0912 13:42:08.757497       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 1\"\nI0912 13:42:08.833205       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-xv57z\" objectUID=21ad770c-30d7-40b3-b73e-8a7184a98ee4 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:08.834719       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-5ff6986c95-xv57z\"\nI0912 
13:42:08.847946       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" need=3 creating=1\nI0912 13:42:08.853700       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 3\"\nI0912 13:42:08.857189       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-xv57z\" objectUID=21ad770c-30d7-40b3-b73e-8a7184a98ee4 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:08.865773       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-lkr82\"\nI0912 13:42:08.868581       1 namespace_controller.go:185] Namespace has been deleted provisioning-9712-3211\nI0912 13:42:09.285458       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6cbea32f-4f7c-47a1-a966-dde7da716596\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00231b8d3aeb0c4eb\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:42:09.285946       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss-0\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6cbea32f-4f7c-47a1-a966-dde7da716596\\\" \"\nI0912 13:42:09.474505       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:09.474695       1 event.go:294] \"Event occurred\" 
object=\"provisioning-7262/pod-subpath-test-dynamicpv-t5m8\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36\\\" \"\nE0912 13:42:11.027043       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-2569/default: secrets \"default-token-wxwlw\" is forbidden: unable to create new content in namespace ephemeral-2569 because it is being terminated\nE0912 13:42:11.329297       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-552/default: secrets \"default-token-t25tq\" is forbidden: unable to create new content in namespace webhook-552 because it is being terminated\nI0912 13:42:11.546060       1 pv_controller.go:930] claim \"volumemode-9342/pvc-6cxjz\" bound to volume \"local-xf52x\"\nI0912 13:42:11.548985       1 pv_controller.go:1340] isVolumeReleased[pvc-2572b641-3900-4dd9-9cec-1b21c0d170f9]: volume is released\nI0912 13:42:11.552143       1 pv_controller.go:1340] isVolumeReleased[pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd]: volume is released\nI0912 13:42:11.558777       1 pv_controller.go:879] volume \"local-xf52x\" entered phase \"Bound\"\nI0912 13:42:11.558810       1 pv_controller.go:982] volume \"local-xf52x\" bound to claim \"volumemode-9342/pvc-6cxjz\"\nI0912 13:42:11.570291       1 pv_controller.go:823] claim \"volumemode-9342/pvc-6cxjz\" entered phase \"Bound\"\nI0912 13:42:11.570582       1 pv_controller.go:930] claim \"provisioning-1262/pvc-sgfrd\" bound to volume \"local-75pt2\"\nI0912 13:42:11.584689       1 pv_controller.go:879] volume \"local-75pt2\" entered phase \"Bound\"\nI0912 13:42:11.584882       1 pv_controller.go:982] volume \"local-75pt2\" bound to claim \"provisioning-1262/pvc-sgfrd\"\nI0912 13:42:11.597183       1 pv_controller.go:823] claim \"provisioning-1262/pvc-sgfrd\" entered phase \"Bound\"\nI0912 13:42:11.734562       1 namespace_controller.go:185] Namespace has been 
deleted container-runtime-3957\nI0912 13:42:12.001056       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" need=0 deleting=1\nI0912 13:42:12.001263       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]\nI0912 13:42:12.001372       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-5ff6986c95\" pod=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-pz8sr\"\nI0912 13:42:12.001998       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 0\"\nI0912 13:42:12.017678       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-pz8sr\" objectUID=9c3d28da-1d81-4365-919b-c76a11b4c74d kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:12.020733       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-5ff6986c95-pz8sr\"\nI0912 13:42:12.028886       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5161/test-rolling-update-with-lb-5ff6986c95-pz8sr\" objectUID=9c3d28da-1d81-4365-919b-c76a11b4c74d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:12.238903       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0afb3a58e59b0d5b3\") on node 
\"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:12.245999       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0afb3a58e59b0d5b3\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:12.760796       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-686dff95d9\" need=1 creating=1\nI0912 13:42:12.761215       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 1\"\nI0912 13:42:12.774180       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-686dff95d9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-686dff95d9-zlgtk\"\nE0912 13:42:12.776536       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-6210/pvc-jmpgt: storageclass.storage.k8s.io \"volume-6210\" not found\nI0912 13:42:12.776988       1 event.go:294] \"Event occurred\" object=\"volume-6210/pvc-jmpgt\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-6210\\\" not found\"\nI0912 13:42:12.783670       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5161/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:42:12.829073       1 pv_controller.go:1340] isVolumeReleased[pvc-2572b641-3900-4dd9-9cec-1b21c0d170f9]: volume is released\nI0912 13:42:12.907279   
    1 pv_controller.go:879] volume \"local-ppjd6\" entered phase \"Available\"\nI0912 13:42:12.941480       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-2572b641-3900-4dd9-9cec-1b21c0d170f9\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-05ba31300f1075e59\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:42:12.965177       1 event.go:294] \"Event occurred\" object=\"statefulset-8989/test-ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod test-ss-1 in StatefulSet test-ss successful\"\nI0912 13:42:13.037638       1 pv_controller_base.go:505] deletion of claim \"volume-expand-4577/aws22m9h\" was already processed\nE0912 13:42:13.054256       1 tokens_controller.go:262] error synchronizing serviceaccount crd-webhook-4419/default: secrets \"default-token-q4h66\" is forbidden: unable to create new content in namespace crd-webhook-4419 because it is being terminated\nI0912 13:42:13.155801       1 namespace_controller.go:185] Namespace has been deleted server-version-6937\nE0912 13:42:13.198266       1 tokens_controller.go:262] error synchronizing serviceaccount volumelimits-1503-2763/default: secrets \"default-token-fjwsn\" is forbidden: unable to create new content in namespace volumelimits-1503-2763 because it is being terminated\nI0912 13:42:13.215929       1 garbagecollector.go:471] \"Processing object\" object=\"services-440/verify-service-up-exec-pod-sl22j\" objectUID=c1a8ffa5-90da-45a5-afdf-005fe72390b3 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:13.218359       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-0287415b-22a3-4b71-8e4a-8d1e2ba0d833\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-2569^f46cd977-13ce-11ec-b0c3-9e2fd55c13c1\") on node \"ip-172-20-60-94.eu-central-1.compute.internal\" \nI0912 13:42:13.231396       1 operation_generator.go:1577] Verified volume is safe to detach for volume 
\"pvc-0287415b-22a3-4b71-8e4a-8d1e2ba0d833\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-2569^f46cd977-13ce-11ec-b0c3-9e2fd55c13c1\") on node \"ip-172-20-60-94.eu-central-1.compute.internal\" \nI0912 13:42:13.282271       1 garbagecollector.go:580] \"Deleting object\" object=\"services-440/verify-service-up-exec-pod-sl22j\" objectUID=c1a8ffa5-90da-45a5-afdf-005fe72390b3 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0912 13:42:13.336365       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:42:13.396281       1 namespace_controller.go:185] Namespace has been deleted cronjob-8876\nI0912 13:42:13.494400       1 event.go:294] \"Event occurred\" object=\"volumemode-877/awsqkzhw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0912 13:42:13.732363       1 event.go:294] \"Event occurred\" object=\"volumemode-877/awsqkzhw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0912 13:42:13.804573       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-0287415b-22a3-4b71-8e4a-8d1e2ba0d833\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-2569^f46cd977-13ce-11ec-b0c3-9e2fd55c13c1\") on node \"ip-172-20-60-94.eu-central-1.compute.internal\" \nI0912 13:42:14.405148       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-59c4fc87b4 to 2\"\nI0912 13:42:14.406385       1 replica_set.go:599] \"Too many 
replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" need=2 deleting=1\nI0912 13:42:14.406552       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" relatedReplicaSets=[test-rolling-update-with-lb-686dff95d9 test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]\nI0912 13:42:14.406767       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-rolling-update-with-lb-59c4fc87b4\" pod=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4-6v4xd\"\nI0912 13:42:14.500676       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 2\"\nI0912 13:42:14.501543       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-5161/test-rolling-update-with-lb-686dff95d9\" need=2 creating=1\nI0912 13:42:14.520128       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-686dff95d9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-686dff95d9-7bkhk\"\nI0912 13:42:14.526351       1 event.go:294] \"Event occurred\" object=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-59c4fc87b4-6v4xd\"\nI0912 13:42:14.527168       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5161/test-rolling-update-with-lb-59c4fc87b4-6v4xd\" objectUID=c1f4761f-d190-4e12-89ef-1c4014b0fc2f kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:14.559055       1 garbagecollector.go:580] \"Deleting object\" 
object="deployment-5161/test-rolling-update-with-lb-59c4fc87b4-6v4xd" objectUID=c1f4761f-d190-4e12-89ef-1c4014b0fc2f kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:14.905247       1 stateful_set_control.go:555] StatefulSet statefulset-8989/test-ss terminating Pod test-ss-0 for update
I0912 13:42:14.914458       1 event.go:294] "Event occurred" object="statefulset-8989/test-ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod test-ss-0 in StatefulSet test-ss successful"
E0912 13:42:14.935524       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:16.055607       1 namespace_controller.go:185] Namespace has been deleted ephemeral-2569
I0912 13:42:16.217585       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-7262/awsb6t9f"
I0912 13:42:16.226503       1 pv_controller.go:640] volume "pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36" is released and reclaim policy "Delete" will be executed
I0912 13:42:16.232539       1 pv_controller.go:879] volume "pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36" entered phase "Released"
I0912 13:42:16.234437       1 pv_controller.go:1340] isVolumeReleased[pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36]: volume is released
I0912 13:42:16.395171       1 event.go:294] "Event occurred" object="resourcequota-2063/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0912 13:42:16.470328       1 garbagecollector.go:471] "Processing object" object="ephemeral-2569-2613/csi-hostpathplugin-0" objectUID=738f8cf2-1c20-4896-99fc-b7f2387a7bbc kind="Pod" virtual=false
I0912 13:42:16.470824       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-2569-2613/csi-hostpathplugin
I0912 13:42:16.470897       1 garbagecollector.go:471] "Processing object" object="ephemeral-2569-2613/csi-hostpathplugin-6bdf4657c5" objectUID=4cfef308-ddd9-4648-abd9-26785bc335df kind="ControllerRevision" virtual=false
I0912 13:42:16.472847       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2569-2613/csi-hostpathplugin-6bdf4657c5" objectUID=4cfef308-ddd9-4648-abd9-26785bc335df kind="ControllerRevision" propagationPolicy=Background
I0912 13:42:16.474466       1 garbagecollector.go:580] "Deleting object" object="ephemeral-2569-2613/csi-hostpathplugin-0" objectUID=738f8cf2-1c20-4896-99fc-b7f2387a7bbc kind="Pod" propagationPolicy=Background
I0912 13:42:16.664612       1 namespace_controller.go:185] Namespace has been deleted webhook-552-markers
E0912 13:42:16.831951       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 13:42:16.917142       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:17.101529       1 pv_controller.go:879] volume "pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d" entered phase "Bound"
I0912 13:42:17.101832       1 pv_controller.go:982] volume "pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d" bound to claim "volumemode-877/awsqkzhw"
I0912 13:42:17.109809       1 pv_controller.go:823] claim "volumemode-877/awsqkzhw" entered phase "Bound"
I0912 13:42:17.276479       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-7905/inline-volume-tester-5bgdf" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0"
I0912 13:42:17.276665       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0"
I0912 13:42:17.276730       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-7905/inline-volume-tester-5bgdf" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1"
I0912 13:42:17.276762       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1"
I0912 13:42:17.319180       1 event.go:294] "Event occurred" object="statefulset-2162/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0912 13:42:17.373806       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-560faf69-c019-4483-b626-794503c4bb94" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-07b2c1d13bbc0d08b") from node "ip-172-20-60-94.eu-central-1.compute.internal" 
E0912 13:42:17.407938       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-4416/default: secrets "default-token-cdqvv" is forbidden: unable to create new content in namespace configmap-4416 because it is being terminated
I0912 13:42:17.564743       1 namespace_controller.go:185] Namespace has been deleted services-4185
E0912 13:42:17.745193       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-1195/default: secrets "default-token-j9l8j" is forbidden: unable to create new content in namespace resourcequota-1195 because it is being terminated
I0912 13:42:17.777254       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-03502e2f3f8ffe3a5") from node "ip-172-20-45-127.eu-central-1.compute.internal" 
I0912 13:42:17.778338       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-1195/test-quota
I0912 13:42:18.050432       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0"
I0912 13:42:18.056402       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf" objectUID=29147406-d8da-409a-b2c8-b5b9dc5ec2af kind="Pod" virtual=false
I0912 13:42:18.060916       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf-my-volume-1, uid: d134a2f2-d24b-4caa-8648-928e55e641e5] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, uid: 29147406-d8da-409a-b2c8-b5b9dc5ec2af] is deletingDependents
I0912 13:42:18.061151       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1"
I0912 13:42:18.060982       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1" objectUID=d134a2f2-d24b-4caa-8648-928e55e641e5 kind="PersistentVolumeClaim" virtual=false
I0912 13:42:18.061542       1 pv_controller.go:640] volume "pvc-5d619c9c-28fa-4025-a354-b7cda6811bcb" is released and reclaim policy "Delete" will be executed
I0912 13:42:18.066593       1 pv_controller.go:879] volume "pvc-5d619c9c-28fa-4025-a354-b7cda6811bcb" entered phase "Released"
I0912 13:42:18.070423       1 garbagecollector.go:471] "Processing object" object="ephemeral-7905/inline-volume-tester-5bgdf" objectUID=29147406-d8da-409a-b2c8-b5b9dc5ec2af kind="Pod" virtual=false
I0912 13:42:18.070867       1 pv_controller.go:1340] isVolumeReleased[pvc-5d619c9c-28fa-4025-a354-b7cda6811bcb]: volume is released
I0912 13:42:18.074404       1 pv_controller.go:640] volume "pvc-d134a2f2-d24b-4caa-8648-928e55e641e5" is released and reclaim policy "Delete" will be executed
I0912 13:42:18.074439       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-7905, name: inline-volume-tester-5bgdf, uid: 29147406-d8da-409a-b2c8-b5b9dc5ec2af]
I0912 13:42:18.090064       1 pv_controller.go:879] volume "pvc-d134a2f2-d24b-4caa-8648-928e55e641e5" entered phase "Released"
I0912 13:42:18.093777       1 pv_controller_base.go:505] deletion of claim "ephemeral-7905/inline-volume-tester-5bgdf-my-volume-0" was already processed
I0912 13:42:18.102375       1 pv_controller_base.go:505] deletion of claim "ephemeral-7905/inline-volume-tester-5bgdf-my-volume-1" was already processed
I0912 13:42:18.115658       1 namespace_controller.go:185] Namespace has been deleted prestop-6994
I0912 13:42:18.280304       1 namespace_controller.go:185] Namespace has been deleted job-1961
W0912 13:42:18.382874       1 reconciler.go:335] Multi-Attach error for volume "aws-j5cc2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b") from node "ip-172-20-60-94.eu-central-1.compute.internal" Volume is already exclusively attached to node ip-172-20-48-249.eu-central-1.compute.internal and can't be attached to another
I0912 13:42:18.385829       1 event.go:294] "Event occurred" object="volume-7955/aws-client" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"aws-j5cc2\" Volume is already exclusively attached to one node and can't be attached to another"
I0912 13:42:18.394472       1 event.go:294] "Event occurred" object="csi-mock-volumes-4989-6272/csi-mockplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful"
I0912 13:42:18.408996       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-4419
E0912 13:42:18.463976       1 tokens_controller.go:262] error synchronizing serviceaccount services-4248/default: secrets "default-token-h2fnm" is forbidden: unable to create new content in namespace services-4248 because it is being terminated
I0912 13:42:18.633212       1 event.go:294] "Event occurred" object="csi-mock-volumes-4989-6272/csi-mockplugin-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I0912 13:42:18.633978       1 event.go:294] "Event occurred" object="resourcequota-2063/test-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0912 13:42:18.649574       1 pvc_protection_controller.go:291] "PVC is unused" PVC="resourcequota-2063/test-claim"
E0912 13:42:18.760868       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-4577/default: serviceaccounts "default" not found
I0912 13:42:18.989144       1 namespace_controller.go:185] Namespace has been deleted volumelimits-1503-2763
I0912 13:42:19.507197       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5d619c9c-28fa-4025-a354-b7cda6811bcb" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-7905^15c764b7-13cf-11ec-a3d1-0211898c66d4") on node "ip-172-20-34-134.eu-central-1.compute.internal" 
I0912 13:42:19.509402       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-5d619c9c-28fa-4025-a354-b7cda6811bcb" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-7905^15c764b7-13cf-11ec-a3d1-0211898c66d4") on node "ip-172-20-34-134.eu-central-1.compute.internal" 
I0912 13:42:19.520634       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d134a2f2-d24b-4caa-8648-928e55e641e5" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-7905^15c77726-13cf-11ec-a3d1-0211898c66d4") on node "ip-172-20-34-134.eu-central-1.compute.internal" 
I0912 13:42:19.523379       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-d134a2f2-d24b-4caa-8648-928e55e641e5" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-7905^15c77726-13cf-11ec-a3d1-0211898c66d4") on node "ip-172-20-34-134.eu-central-1.compute.internal" 
I0912 13:42:19.630108       1 pv_controller.go:1340] isVolumeReleased[pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd]: volume is released
I0912 13:42:19.640470       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-560faf69-c019-4483-b626-794503c4bb94" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-07b2c1d13bbc0d08b") from node "ip-172-20-60-94.eu-central-1.compute.internal" 
I0912 13:42:19.641320       1 event.go:294] "Event occurred" object="statefulset-2162/ss-1" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-560faf69-c019-4483-b626-794503c4bb94\" "
I0912 13:42:19.691668       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-acca6ca6-907f-46dd-9e27-129d3fc476dd" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0afb3a58e59b0d5b3") on node "ip-172-20-48-249.eu-central-1.compute.internal" 
I0912 13:42:19.741071       1 namespace_controller.go:185] Namespace has been deleted provisioning-3995
I0912 13:42:19.785989       1 pv_controller_base.go:505] deletion of claim "volume-4030/awsbmpgl" was already processed
I0912 13:42:20.003473       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-59c4fc87b4 to 1"
I0912 13:42:20.004433       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-5161/test-rolling-update-with-lb-59c4fc87b4" need=1 deleting=1
I0912 13:42:20.004573       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-5161/test-rolling-update-with-lb-59c4fc87b4" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4 test-rolling-update-with-lb-686dff95d9]
I0912 13:42:20.004749       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-59c4fc87b4" pod="deployment-5161/test-rolling-update-with-lb-59c4fc87b4-lkr82"
I0912 13:42:20.011631       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-5161/test-rolling-update-with-lb-686dff95d9" need=3 creating=1
I0912 13:42:20.017288       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-686dff95d9 to 3"
I0912 13:42:20.028802       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb-686dff95d9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-686dff95d9-zzdb7"
I0912 13:42:20.036480       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-59c4fc87b4-lkr82" objectUID=6c58e0b4-5da1-4300-b05f-c10f13dd5170 kind="CiliumEndpoint" virtual=false
I0912 13:42:20.039788       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb-59c4fc87b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-59c4fc87b4-lkr82"
I0912 13:42:20.042696       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-d134a2f2-d24b-4caa-8648-928e55e641e5" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-7905^15c77726-13cf-11ec-a3d1-0211898c66d4") on node "ip-172-20-34-134.eu-central-1.compute.internal" 
I0912 13:42:20.057648       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-03502e2f3f8ffe3a5") from node "ip-172-20-45-127.eu-central-1.compute.internal" 
I0912 13:42:20.058055       1 event.go:294] "Event occurred" object="volumemode-877/pod-7a841024-5680-4458-970a-f13f26d6ddd4" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d\" "
I0912 13:42:20.062743       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-59c4fc87b4-lkr82" objectUID=6c58e0b4-5da1-4300-b05f-c10f13dd5170 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:20.068345       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-5d619c9c-28fa-4025-a354-b7cda6811bcb" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-7905^15c764b7-13cf-11ec-a3d1-0211898c66d4") on node "ip-172-20-34-134.eu-central-1.compute.internal" 
I0912 13:42:20.068740       1 endpoints_controller.go:374] "Error syncing endpoints, retrying" service="deployment-5161/test-rolling-update-with-lb" err="Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:42:20.068868       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint deployment-5161/test-rolling-update-with-lb: Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
E0912 13:42:20.500514       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:20.604804       1 garbagecollector.go:471] "Processing object" object="dns-8008/e2e-dns-utils" objectUID=fe942b2f-bba5-49ff-8adf-95602e6c7c0a kind="CiliumEndpoint" virtual=false
I0912 13:42:20.615684       1 garbagecollector.go:580] "Deleting object" object="dns-8008/e2e-dns-utils" objectUID=fe942b2f-bba5-49ff-8adf-95602e6c7c0a kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:20.725557       1 garbagecollector.go:471] "Processing object" object="dns-8008/e2e-configmap-dns-server-790923d7-6df7-4dc3-a8a9-ac2f5b8da16b" objectUID=58012e24-093c-49c5-a33b-1d8e143ecaed kind="CiliumEndpoint" virtual=false
I0912 13:42:20.731182       1 garbagecollector.go:580] "Deleting object" object="dns-8008/e2e-configmap-dns-server-790923d7-6df7-4dc3-a8a9-ac2f5b8da16b" objectUID=58012e24-093c-49c5-a33b-1d8e143ecaed kind="CiliumEndpoint" propagationPolicy=Background
E0912 13:42:21.358590       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:21.682568       1 namespace_controller.go:185] Namespace has been deleted pods-920
I0912 13:42:21.827459       1 event.go:294] "Event occurred" object="statefulset-8989/test-ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod test-ss-0 in StatefulSet test-ss successful"
E0912 13:42:21.911049       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-2569-2613/default: secrets "default-token-2nds6" is forbidden: unable to create new content in namespace ephemeral-2569-2613 because it is being terminated
I0912 13:42:22.275018       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "aws-j5cc2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b") on node "ip-172-20-48-249.eu-central-1.compute.internal" 
I0912 13:42:22.282157       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96") on node "ip-172-20-48-249.eu-central-1.compute.internal" 
I0912 13:42:22.284234       1 operation_generator.go:1577] Verified volume is safe to detach for volume "aws-j5cc2" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b") on node "ip-172-20-48-249.eu-central-1.compute.internal" 
I0912 13:42:22.291547       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96") on node "ip-172-20-48-249.eu-central-1.compute.internal" 
E0912 13:42:22.302733       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:22.540363       1 namespace_controller.go:185] Namespace has been deleted configmap-4416
I0912 13:42:22.851410       1 namespace_controller.go:185] Namespace has been deleted resourcequota-1195
E0912 13:42:23.063297       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:23.634032       1 garbagecollector.go:471] "Processing object" object="dns-1461/dns-test-430617dc-abb6-4468-bf71-4eb4112a6d1a" objectUID=fa578b62-f69c-4e5c-a2c4-f43246699384 kind="CiliumEndpoint" virtual=false
I0912 13:42:23.641668       1 garbagecollector.go:580] "Deleting object" object="dns-1461/dns-test-430617dc-abb6-4468-bf71-4eb4112a6d1a" objectUID=fa578b62-f69c-4e5c-a2c4-f43246699384 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:23.670262       1 namespace_controller.go:185] Namespace has been deleted services-4248
I0912 13:42:23.748444       1 garbagecollector.go:471] "Processing object" object="dns-1461/dns-test-service-2-fcbt8" objectUID=386c4ffa-5fe4-45b7-9d6e-f4666f21ae9f kind="EndpointSlice" virtual=false
I0912 13:42:23.752128       1 garbagecollector.go:580] "Deleting object" object="dns-1461/dns-test-service-2-fcbt8" objectUID=386c4ffa-5fe4-45b7-9d6e-f4666f21ae9f kind="EndpointSlice" propagationPolicy=Background
I0912 13:42:23.853049       1 event.go:294] "Event occurred" object="provisioning-9829-8662/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0912 13:42:23.858844       1 namespace_controller.go:185] Namespace has been deleted volume-expand-4577
I0912 13:42:23.876581       1 namespace_controller.go:185] Namespace has been deleted downward-api-74
I0912 13:42:24.200595       1 event.go:294] "Event occurred" object="provisioning-9829/csi-hostpathf2g4s" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-9829\" or manually created by system administrator"
I0912 13:42:24.281981       1 event.go:294] "Event occurred" object="csi-mock-volumes-4989/pvc-dmkt5" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-4989\" or manually created by system administrator"
I0912 13:42:24.300379       1 pv_controller.go:879] volume "pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39" entered phase "Bound"
I0912 13:42:24.300415       1 pv_controller.go:982] volume "pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39" bound to claim "csi-mock-volumes-4989/pvc-dmkt5"
I0912 13:42:24.311513       1 pv_controller.go:823] claim "csi-mock-volumes-4989/pvc-dmkt5" entered phase "Bound"
E0912 13:42:24.379352       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-3768/pvc-6sgtb: storageclass.storage.k8s.io "volume-3768" not found
I0912 13:42:24.380063       1 event.go:294] "Event occurred" object="volume-3768/pvc-6sgtb" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volume-3768\" not found"
I0912 13:42:24.494920       1 pv_controller.go:879] volume "local-6h89g" entered phase "Available"
E0912 13:42:24.690477       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:24.703069       1 utils.go:366] couldn't find ipfamilies for headless service: services-978/externalname-service likely because controller manager is likely connected to an old apiserver that does not support ip families yet. The service endpoint slice will use dual stack families until api-server default it correctly
I0912 13:42:24.808780       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-4989^4") from node "ip-172-20-34-134.eu-central-1.compute.internal" 
E0912 13:42:24.939418       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-7905/default: secrets "default-token-ckczj" is forbidden: unable to create new content in namespace ephemeral-7905 because it is being terminated
W0912 13:42:25.035630       1 utils.go:265] Service services-440/service-headless-toggled using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
I0912 13:42:25.051490       1 replica_set.go:563] "Too few replicas" replicaSet="services-978/externalname-service" need=2 creating=2
I0912 13:42:25.060708       1 event.go:294] "Event occurred" object="services-978/externalname-service" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalname-service-vs5pg"
I0912 13:42:25.074812       1 event.go:294] "Event occurred" object="services-978/externalname-service" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: externalname-service-8nhwj"
I0912 13:42:25.321514       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-4989^4") from node "ip-172-20-34-134.eu-central-1.compute.internal" 
I0912 13:42:25.321640       1 event.go:294] "Event occurred" object="csi-mock-volumes-4989/pvc-volume-tester-q5f8j" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39\" "
E0912 13:42:25.515639       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-6328/inline-volume-5hhq6-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0912 13:42:25.516109       1 event.go:294] "Event occurred" object="ephemeral-6328/inline-volume-5hhq6-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0912 13:42:25.532196       1 garbagecollector.go:471] "Processing object" object="prestop-6855/server" objectUID=9af81524-8489-4af6-af7d-cca2bccff33a kind="CiliumEndpoint" virtual=false
I0912 13:42:25.535948       1 garbagecollector.go:580] "Deleting object" object="prestop-6855/server" objectUID=9af81524-8489-4af6-af7d-cca2bccff33a kind="CiliumEndpoint" propagationPolicy=Background
E0912 13:42:25.686298       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-3335/pvc-f2g7s: storageclass.storage.k8s.io "provisioning-3335" not found
I0912 13:42:25.686559       1 event.go:294] "Event occurred" object="provisioning-3335/pvc-f2g7s" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-3335\" not found"
I0912 13:42:25.800294       1 pv_controller.go:879] volume "local-cjxt9" entered phase "Available"
I0912 13:42:25.842375       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-6328, name: inline-volume-5hhq6, uid: 03d41687-6259-43a8-9fa7-106d142a6f7b] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0912 13:42:25.842433       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-5hhq6-my-volume" objectUID=70d9d5b9-109d-4dc6-ab7d-dacdcbf3c436 kind="PersistentVolumeClaim" virtual=false
I0912 13:42:25.842771       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-5hhq6" objectUID=03d41687-6259-43a8-9fa7-106d142a6f7b kind="Pod" virtual=false
I0912 13:42:25.844767       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-6328, name: inline-volume-5hhq6-my-volume, uid: 70d9d5b9-109d-4dc6-ab7d-dacdcbf3c436] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-6328, name: inline-volume-5hhq6, uid: 03d41687-6259-43a8-9fa7-106d142a6f7b] is deletingDependents
I0912 13:42:25.846085       1 garbagecollector.go:580] "Deleting object" object="ephemeral-6328/inline-volume-5hhq6-my-volume" objectUID=70d9d5b9-109d-4dc6-ab7d-dacdcbf3c436 kind="PersistentVolumeClaim" propagationPolicy=Background
E0912 13:42:25.848968       1 pv_controller.go:1451] error finding provisioning plugin for claim ephemeral-6328/inline-volume-5hhq6-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0912 13:42:25.849020       1 event.go:294] "Event occurred" object="ephemeral-6328/inline-volume-5hhq6-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0912 13:42:25.849581       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-5hhq6-my-volume" objectUID=70d9d5b9-109d-4dc6-ab7d-dacdcbf3c436 kind="PersistentVolumeClaim" virtual=false
I0912 13:42:25.851605       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-6328/inline-volume-5hhq6-my-volume"
I0912 13:42:25.855893       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-5hhq6" objectUID=03d41687-6259-43a8-9fa7-106d142a6f7b kind="Pod" virtual=false
I0912 13:42:25.857326       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-6328, name: inline-volume-5hhq6, uid: 03d41687-6259-43a8-9fa7-106d142a6f7b]
I0912 13:42:26.094567       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-2063/test-quota
E0912 13:42:26.122680       1 tokens_controller.go:262] error synchronizing serviceaccount dns-8008/default: secrets "default-token-nl47v" is forbidden: unable to create new content in namespace dns-8008 because it is being terminated
I0912 13:42:26.546900       1 pv_controller.go:930] claim "volume-3768/pvc-6sgtb" bound to volume "local-6h89g"
I0912 13:42:26.551990       1 pv_controller.go:1340] isVolumeReleased[pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36]: volume is released
I0912 13:42:26.560287       1 pv_controller.go:879] volume "local-6h89g" entered phase "Bound"
I0912 13:42:26.560508       1 pv_controller.go:982] volume "local-6h89g" bound to claim "volume-3768/pvc-6sgtb"
I0912 13:42:26.571416       1 pv_controller.go:823] claim "volume-3768/pvc-6sgtb" entered phase "Bound"
I0912 13:42:26.571833       1 pv_controller.go:930] claim "volume-6210/pvc-jmpgt" bound to volume "local-ppjd6"
I0912 13:42:26.572390       1 event.go:294] "Event occurred" object="provisioning-9829/csi-hostpathf2g4s" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-9829\" or manually created by system administrator"
I0912 13:42:26.586404       1 pv_controller.go:879] volume "local-ppjd6" entered phase "Bound"
I0912 13:42:26.586538       1 pv_controller.go:982] volume "local-ppjd6" bound to claim "volume-6210/pvc-jmpgt"
I0912 13:42:26.593914       1 pv_controller.go:823] claim "volume-6210/pvc-jmpgt" entered phase "Bound"
I0912 13:42:26.594293       1 pv_controller.go:930] claim "provisioning-3335/pvc-f2g7s" bound to volume "local-cjxt9"
I0912 13:42:26.605093       1 pv_controller.go:879] volume "local-cjxt9" entered phase "Bound"
I0912 13:42:26.605121       1 pv_controller.go:982] volume "local-cjxt9" bound to claim "provisioning-3335/pvc-f2g7s"
I0912 13:42:26.611849       1 pv_controller.go:823] claim "provisioning-3335/pvc-f2g7s" entered phase "Bound"
I0912 13:42:26.631555       1 replica_set.go:599] "Too many replicas" replicaSet="deployment-5161/test-rolling-update-with-lb-59c4fc87b4" need=0 deleting=1
I0912 13:42:26.632108       1 replica_set.go:227] "Found related ReplicaSets" replicaSet="deployment-5161/test-rolling-update-with-lb-59c4fc87b4" relatedReplicaSets=[test-rolling-update-with-lb-59c4fc87b4 test-rolling-update-with-lb-686dff95d9 test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]
I0912 13:42:26.632070       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-59c4fc87b4 to 0"
I0912 13:42:26.632418       1 controller_utils.go:592] "Deleting pod" controller="test-rolling-update-with-lb-59c4fc87b4" pod="deployment-5161/test-rolling-update-with-lb-59c4fc87b4-5jkfg"
I0912 13:42:26.640148       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-5161/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:42:26.645941       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-59c4fc87b4-5jkfg" objectUID=098eb960-1fc1-468f-a10b-2b99b892e40b kind="CiliumEndpoint" virtual=false
I0912 13:42:26.647302       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb-59c4fc87b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-59c4fc87b4-5jkfg"
I0912 13:42:26.664976       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-59c4fc87b4-5jkfg" objectUID=098eb960-1fc1-468f-a10b-2b99b892e40b kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:26.958662       1 namespace_controller.go:185] Namespace has been deleted ephemeral-2569-2613
I0912 13:42:27.721689       1 resource_quota_controller.go:435] syncing resource quota controller with updated resources from discovery: added: [], removed: [resourcequota.example.com/v1, Resource=e2e-test-resourcequota-7437-crds]
I0912 13:42:27.721974       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0912 13:42:27.722021       1 shared_informer.go:247] Caches are synced for resource quota 
I0912 13:42:27.722031       1 resource_quota_controller.go:454] synced quota controller
I0912 13:42:28.156795       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [], removed: [resourcequota.example.com/v1, Resource=e2e-test-resourcequota-7437-crds]
I0912 13:42:28.156899       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0912 13:42:28.156978       1 shared_informer.go:247] Caches are synced for garbage collector 
I0912 13:42:28.156990       1 garbagecollector.go:254] synced garbage collector
I0912 13:42:28.373988       1 namespace_controller.go:185] Namespace has been deleted configmap-1250
I0912 13:42:28.502678       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-231/csi-hostpathln78g"
I0912 13:42:28.510004       1 pv_controller.go:640] volume "pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99" is released and reclaim policy "Delete" will be executed
I0912 13:42:28.520608       1 pv_controller.go:879] volume "pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99" entered phase "Released"
I0912 13:42:28.533672       1 pv_controller_base.go:505] deletion of claim "provisioning-231/csi-hostpathln78g" was already processed
I0912 13:42:29.017849       1 pv_controller.go:1340] isVolumeReleased[pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36]: volume is released
I0912 13:42:29.192425       1 pv_controller_base.go:505] deletion of claim "provisioning-7262/awsb6t9f" was already processed
I0912 13:42:29.282513       1 pv_controller.go:879] volume "pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb" entered phase "Bound"
I0912 13:42:29.282676       1 pv_controller.go:982] volume "pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb" bound to claim "provisioning-9829/csi-hostpathf2g4s"
I0912 13:42:29.292348       1 pv_controller.go:823] claim "provisioning-9829/csi-hostpathf2g4s" entered phase "Bound"
I0912 13:42:29.745941       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-24f6bd12-93cf-4446-a8bf-76895ee42a36" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0adcdad1f9c6e9d96") on node "ip-172-20-48-249.eu-central-1.compute.internal" 
I0912 13:42:29.762526       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volumemode-9342/pvc-6cxjz"
I0912 13:42:29.774344       1 pv_controller.go:640] volume "local-xf52x" is released and reclaim policy "Retain" will be executed
I0912 13:42:29.780623       1 pv_controller.go:879] volume "local-xf52x" entered phase "Released"
E0912 13:42:29.860464       1 tokens_controller.go:262] error synchronizing serviceaccount projected-3892/default: secrets "default-token-kjjww" is forbidden: unable to create new content in namespace projected-3892 because it is being terminated
I0912 13:42:29.885479       1 pv_controller_base.go:505] deletion of claim "volumemode-9342/pvc-6cxjz" was already processed
I0912 13:42:30.204006       1 garbagecollector.go:471] "Processing object" 
object=\"ephemeral-7905-6080/csi-hostpathplugin-7cc88f9748\" objectUID=cba55982-0ae1-4002-b28b-e040a7c224f1 kind=\"ControllerRevision\" virtual=false\nI0912 13:42:30.204356       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-7905-6080/csi-hostpathplugin\nI0912 13:42:30.204448       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-7905-6080/csi-hostpathplugin-0\" objectUID=9ea33431-6718-4914-96ca-e36da46bd621 kind=\"Pod\" virtual=false\nI0912 13:42:30.222601       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7905-6080/csi-hostpathplugin-7cc88f9748\" objectUID=cba55982-0ae1-4002-b28b-e040a7c224f1 kind=\"ControllerRevision\" propagationPolicy=Background\nI0912 13:42:30.222928       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-7905-6080/csi-hostpathplugin-0\" objectUID=9ea33431-6718-4914-96ca-e36da46bd621 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:30.229394       1 namespace_controller.go:185] Namespace has been deleted ephemeral-7905\nE0912 13:42:30.794919       1 tokens_controller.go:262] error synchronizing serviceaccount prestop-6855/default: secrets \"default-token-xkpl5\" is forbidden: unable to create new content in namespace prestop-6855 because it is being terminated\nI0912 13:42:30.964724       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nI0912 13:42:30.998657       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-9829^45d5e5ff-13cf-11ec-a6d1-6aa257ffd746\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:30.998696       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553\" (UniqueName: 
\"kubernetes.io/csi/ebs.csi.aws.com^vol-0e136abf09eb16f0d\") from node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:42:31.169334       1 event.go:294] \"Event occurred\" object=\"ephemeral-6328-4455/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0912 13:42:31.183811       1 namespace_controller.go:185] Namespace has been deleted resourcequota-2063\nI0912 13:42:31.340044       1 namespace_controller.go:185] Namespace has been deleted dns-8008\nI0912 13:42:31.493770       1 event.go:294] \"Event occurred\" object=\"ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForPodScheduled\" message=\"waiting for pod inline-volume-tester-jd5cl to be scheduled\"\nI0912 13:42:31.527212       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-9829^45d5e5ff-13cf-11ec-a6d1-6aa257ffd746\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:31.527971       1 event.go:294] \"Event occurred\" object=\"provisioning-9829/pod-subpath-test-dynamicpv-ms2b\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb\\\" \"\nE0912 13:42:31.850972       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:42:32.327671       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99\" (UniqueName: 
\"kubernetes.io/csi/csi-hostpath-provisioning-231^3770b608-13cf-11ec-b2b4-8286f03a97cf\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:32.329853       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-231^3770b608-13cf-11ec-b2b4-8286f03a97cf\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:32.860034       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-30ef71fa-c1e3-4f5f-8547-8633795aad99\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-231^3770b608-13cf-11ec-b2b4-8286f03a97cf\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:32.885488       1 event.go:294] \"Event occurred\" object=\"ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-6328\\\" or manually created by system administrator\"\nI0912 13:42:33.276772       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e136abf09eb16f0d\") from node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:42:33.276907       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss-2\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553\\\" \"\nI0912 13:42:33.668490       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-1262/pvc-sgfrd\"\nI0912 13:42:33.694317       1 pv_controller.go:640] volume \"local-75pt2\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:42:33.701018      
 1 pv_controller.go:879] volume \"local-75pt2\" entered phase \"Released\"\nI0912 13:42:33.784462       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-8989/test-ss-5cf9766999\" objectUID=0b9dc494-e0e3-4023-801b-42ec4b6cc9d5 kind=\"ControllerRevision\" virtual=false\nI0912 13:42:33.785121       1 stateful_set.go:440] StatefulSet has been deleted statefulset-8989/test-ss\nI0912 13:42:33.785209       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-8989/test-ss-1\" objectUID=899e3d48-da46-47f7-96f0-392e7f3a37a3 kind=\"Pod\" virtual=false\nI0912 13:42:33.785278       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-8989/test-ss-7579f76666\" objectUID=e7f24780-feba-40c6-9fac-717eee2c5d42 kind=\"ControllerRevision\" virtual=false\nI0912 13:42:33.785293       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-8989/test-ss-0\" objectUID=a2b2a5fa-a08d-46a3-8049-0d40e65c7828 kind=\"Pod\" virtual=false\nI0912 13:42:33.804816       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-8989/test-ss-0\" objectUID=a2b2a5fa-a08d-46a3-8049-0d40e65c7828 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:33.804793       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-8989/test-ss-5cf9766999\" objectUID=0b9dc494-e0e3-4023-801b-42ec4b6cc9d5 kind=\"ControllerRevision\" propagationPolicy=Background\nI0912 13:42:33.805330       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-8989/test-ss-1\" objectUID=899e3d48-da46-47f7-96f0-392e7f3a37a3 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:33.805415       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-8989/test-ss-7579f76666\" objectUID=e7f24780-feba-40c6-9fac-717eee2c5d42 kind=\"ControllerRevision\" propagationPolicy=Background\nI0912 13:42:33.813063       1 pv_controller_base.go:505] deletion of claim \"provisioning-1262/pvc-sgfrd\" was already processed\nE0912 
13:42:33.998045       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-231/default: secrets \"default-token-6hlhf\" is forbidden: unable to create new content in namespace provisioning-231 because it is being terminated\nI0912 13:42:34.310957       1 namespace_controller.go:185] Namespace has been deleted dns-1461\nI0912 13:42:34.311405       1 garbagecollector.go:471] \"Processing object\" object=\"dns-3694/test-dns-nameservers\" objectUID=d04b6f9f-c34c-4dcc-9728-a1d4f81ff8ea kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:34.314492       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-3694/test-dns-nameservers\" objectUID=d04b6f9f-c34c-4dcc-9728-a1d4f81ff8ea kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:34.391329       1 namespace_controller.go:185] Namespace has been deleted volume-4030\nI0912 13:42:34.983076       1 namespace_controller.go:185] Namespace has been deleted projected-3892\nE0912 13:42:35.270201       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-6329/default: secrets \"default-token-f9fbb\" is forbidden: unable to create new content in namespace provisioning-6329 because it is being terminated\nI0912 13:42:35.330646       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-6210/pvc-jmpgt\"\nI0912 13:42:35.340799       1 event.go:294] \"Event occurred\" object=\"statefulset-5664/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0912 13:42:35.365042       1 pv_controller.go:640] volume \"local-ppjd6\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:42:35.374581       1 pv_controller.go:879] volume \"local-ppjd6\" entered phase \"Released\"\nI0912 13:42:35.445219       1 pv_controller_base.go:505] deletion of claim \"volume-6210/pvc-jmpgt\" was already processed\nE0912 13:42:35.605524       1 reflector.go:138] 
k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:42:35.910000       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-j5cc2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:35.966957       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-j5cc2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b\") from node \"ip-172-20-60-94.eu-central-1.compute.internal\" \nE0912 13:42:36.082870       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:42:36.257646       1 pv_controller.go:879] volume \"pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77\" entered phase \"Bound\"\nI0912 13:42:36.257749       1 pv_controller.go:982] volume \"pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77\" bound to claim \"ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0\"\nI0912 13:42:36.268409       1 pv_controller.go:823] claim \"ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0\" entered phase \"Bound\"\nE0912 13:42:36.323623       1 pv_controller.go:1451] error finding provisioning plugin for claim volume-2598/pvc-24dgn: storageclass.storage.k8s.io \"volume-2598\" not found\nI0912 13:42:36.323697       1 event.go:294] \"Event occurred\" object=\"volume-2598/pvc-24dgn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-2598\\\" not found\"\nI0912 13:42:36.504705       1 pv_controller.go:879] volume \"aws-lmp8z\" entered phase \"Available\"\nE0912 13:42:36.964800       1 tokens_controller.go:262] error synchronizing 
serviceaccount volumemode-9342/default: serviceaccounts \"default\" not found\nI0912 13:42:36.980541       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-6328^4a026bd3-13cf-11ec-ba0e-867ea978be8d\") from node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nE0912 13:42:37.043173       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7262/default: secrets \"default-token-75j96\" is forbidden: unable to create new content in namespace provisioning-7262 because it is being terminated\nI0912 13:42:37.520272       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-6328^4a026bd3-13cf-11ec-ba0e-867ea978be8d\") from node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:42:37.520504       1 event.go:294] \"Event occurred\" object=\"ephemeral-6328/inline-volume-tester-jd5cl\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77\\\" \"\nI0912 13:42:37.667121       1 event.go:294] \"Event occurred\" object=\"topology-4678/pvc-hfzmf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0912 13:42:37.788913       1 event.go:294] \"Event occurred\" object=\"topology-4678/pvc-hfzmf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nE0912 13:42:37.872194       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-559/pvc-c67zh: 
storageclass.storage.k8s.io \"provisioning-559\" not found\nI0912 13:42:37.872593       1 event.go:294] \"Event occurred\" object=\"provisioning-559/pvc-c67zh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-559\\\" not found\"\nI0912 13:42:37.994067       1 pv_controller.go:879] volume \"local-29pwz\" entered phase \"Available\"\nI0912 13:42:38.221282       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"aws-j5cc2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b\") from node \"ip-172-20-60-94.eu-central-1.compute.internal\" \nI0912 13:42:38.221657       1 event.go:294] \"Event occurred\" object=\"volume-7955/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-j5cc2\\\" \"\nI0912 13:42:38.989335       1 namespace_controller.go:185] Namespace has been deleted apply-2653\nI0912 13:42:39.101979       1 namespace_controller.go:185] Namespace has been deleted provisioning-231\nI0912 13:42:39.155890       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-231-2257/csi-hostpathplugin-74bd84f4d7\" objectUID=f7586133-7323-4e4a-b192-7be326998ef0 kind=\"ControllerRevision\" virtual=false\nI0912 13:42:39.157051       1 stateful_set.go:440] StatefulSet has been deleted provisioning-231-2257/csi-hostpathplugin\nI0912 13:42:39.157099       1 garbagecollector.go:471] \"Processing object\" object=\"provisioning-231-2257/csi-hostpathplugin-0\" objectUID=bd965159-13ae-40ae-b83d-c35ae65f0cd4 kind=\"Pod\" virtual=false\nI0912 13:42:39.163930       1 garbagecollector.go:580] \"Deleting object\" object=\"provisioning-231-2257/csi-hostpathplugin-0\" objectUID=bd965159-13ae-40ae-b83d-c35ae65f0cd4 kind=\"Pod\" propagationPolicy=Background\nI0912 13:42:39.164079       1 garbagecollector.go:580] \"Deleting object\" 
object=\"provisioning-231-2257/csi-hostpathplugin-74bd84f4d7\" objectUID=f7586133-7323-4e4a-b192-7be326998ef0 kind=\"ControllerRevision\" propagationPolicy=Background\nI0912 13:42:39.652252       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4989^4\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:42:39.655435       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4989^4\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nE0912 13:42:39.684308       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-8989/default: secrets \"default-token-7jgh6\" is forbidden: unable to create new content in namespace statefulset-8989 because it is being terminated\nI0912 13:42:39.732822       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-8989/test-kd45h\" objectUID=66aac12e-0f70-4271-9c9d-52c1feb9570b kind=\"EndpointSlice\" virtual=false\nI0912 13:42:39.737072       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-8989/test-kd45h\" objectUID=66aac12e-0f70-4271-9c9d-52c1feb9570b kind=\"EndpointSlice\" propagationPolicy=Background\nE0912 13:42:39.777569       1 tokens_controller.go:262] error synchronizing serviceaccount dns-3694/default: secrets \"default-token-6dg8g\" is forbidden: unable to create new content in namespace dns-3694 because it is being terminated\nI0912 13:42:39.937837       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-3335/pvc-f2g7s\"\nI0912 13:42:39.944454       1 pv_controller.go:640] volume \"local-cjxt9\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:42:39.948338       1 pv_controller.go:879] volume \"local-cjxt9\" entered phase \"Released\"\nI0912 
13:42:40.057610       1 pv_controller_base.go:505] deletion of claim \"provisioning-3335/pvc-f2g7s\" was already processed\nI0912 13:42:40.208664       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-4989^4\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:42:40.397981       1 namespace_controller.go:185] Namespace has been deleted provisioning-6329\nI0912 13:42:40.725314       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-4989/pvc-dmkt5\"\nI0912 13:42:40.733042       1 pv_controller.go:640] volume \"pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39\" is released and reclaim policy \"Delete\" will be executed\nI0912 13:42:40.736076       1 pv_controller.go:879] volume \"pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39\" entered phase \"Released\"\nI0912 13:42:40.743984       1 pv_controller.go:1340] isVolumeReleased[pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39]: volume is released\nE0912 13:42:40.758380       1 pv_protection_controller.go:118] PV pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39 failed with : Operation cannot be fulfilled on persistentvolumes \"pvc-e94ec0bb-e2b2-45f9-9552-540d8823fb39\": the object has been modified; please apply your changes to the latest version and try again\nI0912 13:42:40.761560       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-4989/pvc-dmkt5\" was already processed\nI0912 13:42:40.819296       1 namespace_controller.go:185] Namespace has been deleted ephemeral-7905-6080\nI0912 13:42:41.039850       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-2065/liveness-33c1edff-90e2-451c-bbca-4e25b25ae0bc\" objectUID=ac178748-1d3d-4d32-bcbf-4f2526f5aabc kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:41.043631       1 garbagecollector.go:580] \"Deleting object\" 
object=\"container-probe-2065/liveness-33c1edff-90e2-451c-bbca-4e25b25ae0bc\" objectUID=ac178748-1d3d-4d32-bcbf-4f2526f5aabc kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:41.131825       1 pv_controller.go:879] volume \"pvc-98559cd8-f1a8-4f46-9289-49e87eee3891\" entered phase \"Bound\"\nI0912 13:42:41.132048       1 pv_controller.go:982] volume \"pvc-98559cd8-f1a8-4f46-9289-49e87eee3891\" bound to claim \"topology-4678/pvc-hfzmf\"\nI0912 13:42:41.148035       1 pv_controller.go:823] claim \"topology-4678/pvc-hfzmf\" entered phase \"Bound\"\nI0912 13:42:41.231112       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-2990-6570/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0912 13:42:41.273784       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-03502e2f3f8ffe3a5\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:42:41.280256       1 garbagecollector.go:471] \"Processing object\" object=\"kubectl-3520/httpd\" objectUID=2fb128fd-f214-4458-8e58-d264d08eda8e kind=\"CiliumEndpoint\" virtual=false\nI0912 13:42:41.283330       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-03502e2f3f8ffe3a5\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:42:41.286232       1 garbagecollector.go:580] \"Deleting object\" object=\"kubectl-3520/httpd\" objectUID=2fb128fd-f214-4458-8e58-d264d08eda8e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:42:41.463205       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-2990-6570/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0912 13:42:41.548506       1 pv_controller.go:930] claim \"provisioning-559/pvc-c67zh\" bound to volume \"local-29pwz\"\nE0912 13:42:41.562734       1 tokens_controller.go:262] error synchronizing serviceaccount volume-6210/default: secrets \"default-token-qsdqx\" is forbidden: unable to create new content in namespace volume-6210 because it is being terminated\nI0912 13:42:41.570682       1 pv_controller.go:879] volume \"local-29pwz\" entered phase \"Bound\"\nI0912 13:42:41.570716       1 pv_controller.go:982] volume \"local-29pwz\" bound to claim \"provisioning-559/pvc-c67zh\"\nI0912 13:42:41.579646       1 pv_controller.go:823] claim \"provisioning-559/pvc-c67zh\" entered phase \"Bound\"\nI0912 13:42:41.579998       1 pv_controller.go:930] claim \"volume-2598/pvc-24dgn\" bound to volume \"aws-lmp8z\"\nI0912 13:42:41.594759       1 pv_controller.go:879] volume \"aws-lmp8z\" entered phase \"Bound\"\nI0912 13:42:41.594998       1 pv_controller.go:982] volume \"aws-lmp8z\" bound to claim \"volume-2598/pvc-24dgn\"\nI0912 13:42:41.606562       1 pv_controller.go:823] claim \"volume-2598/pvc-24dgn\" entered phase \"Bound\"\nI0912 13:42:41.792145       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-98559cd8-f1a8-4f46-9289-49e87eee3891\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01ceb9189da0217b0\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:42:42.021734       1 namespace_controller.go:185] Namespace has been deleted volumemode-9342\nE0912 13:42:42.089516       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-2-5982/default: secrets \"default-token-vqthb\" is forbidden: unable to create new content in namespace disruption-2-5982 because it is being terminated\nE0912 13:42:42.251563       1 tokens_controller.go:262] error synchronizing serviceaccount 
disruption-4261/default: secrets \"default-token-t5b5p\" is forbidden: unable to create new content in namespace disruption-4261 because it is being terminated\nI0912 13:42:42.268015       1 namespace_controller.go:185] Namespace has been deleted provisioning-7262\nI0912 13:42:42.481819       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-9829/csi-hostpathf2g4s\"\nI0912 13:42:42.487375       1 pv_controller.go:640] volume \"pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb\" is released and reclaim policy \"Delete\" will be executed\nI0912 13:42:42.491092       1 pv_controller.go:879] volume \"pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb\" entered phase \"Released\"\nI0912 13:42:42.499306       1 pv_controller.go:1340] isVolumeReleased[pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb]: volume is released\nI0912 13:42:42.520904       1 pv_controller_base.go:505] deletion of claim \"provisioning-9829/csi-hostpathf2g4s\" was already processed\nI0912 13:42:43.416932       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-lmp8z\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:42:43.632042       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volumemode-877/awsqkzhw\"\nI0912 13:42:43.639892       1 pv_controller.go:640] volume \"pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d\" is released and reclaim policy \"Delete\" will be executed\nI0912 13:42:43.644169       1 pv_controller.go:879] volume \"pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d\" entered phase \"Released\"\nI0912 13:42:43.651107       1 pv_controller.go:1340] isVolumeReleased[pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d]: volume is released\nE0912 13:42:43.999159       1 pv_controller.go:1451] error finding provisioning plugin for claim volumemode-350/pvc-mzv5d: storageclass.storage.k8s.io \"volumemode-350\" not found\nI0912 13:42:43.999297       1 event.go:294] \"Event occurred\" 
object="volumemode-350/pvc-mzv5d" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"volumemode-350\" not found"
I0912 13:42:44.107328       1 event.go:294] "Event occurred" object="topology-4678/pod-b82eef41-00a4-4767-8186-dfa6166cbfa9" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-98559cd8-f1a8-4f46-9289-49e87eee3891\" "
I0912 13:42:44.107361       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-98559cd8-f1a8-4f46-9289-49e87eee3891" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-01ceb9189da0217b0") from node "ip-172-20-34-134.eu-central-1.compute.internal"
I0912 13:42:44.177167       1 pv_controller.go:879] volume "aws-5p2ln" entered phase "Available"
I0912 13:42:44.430939       1 garbagecollector.go:471] "Processing object" object="services-440/verify-service-up-exec-pod-thg9w" objectUID=4be47154-499b-4988-bff0-46b761a51e84 kind="CiliumEndpoint" virtual=false
I0912 13:42:44.457479       1 garbagecollector.go:580] "Deleting object" object="services-440/verify-service-up-exec-pod-thg9w" objectUID=4be47154-499b-4988-bff0-46b761a51e84 kind="CiliumEndpoint" propagationPolicy=Background
E0912 13:42:44.568194       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-231-2257/default: secrets "default-token-zw5xv" is forbidden: unable to create new content in namespace provisioning-231-2257 because it is being terminated
I0912 13:42:44.821697       1 namespace_controller.go:185] Namespace has been deleted statefulset-8989
I0912 13:42:44.859771       1 namespace_controller.go:185] Namespace has been deleted dns-3694
I0912 13:42:45.260007       1 namespace_controller.go:185] Namespace has been deleted provisioning-1262
I0912 13:42:45.723671       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "aws-lmp8z" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868") from node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:42:45.723863       1 event.go:294] "Event occurred" object="volume-2598/aws-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"aws-lmp8z\" "
I0912 13:42:45.990924       1 event.go:294] "Event occurred" object="statefulset-5664/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
E0912 13:42:46.377003       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-2065/default: secrets "default-token-2w46p" is forbidden: unable to create new content in namespace container-probe-2065 because it is being terminated
E0912 13:42:46.647917       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:46.657163       1 stateful_set_control.go:521] StatefulSet statefulset-5664/ss terminating Pod ss-1 for scale down
I0912 13:42:46.661507       1 event.go:294] "Event occurred" object="statefulset-5664/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
I0912 13:42:46.687484       1 namespace_controller.go:185] Namespace has been deleted health-9231
I0912 13:42:46.757046       1 namespace_controller.go:185] Namespace has been deleted volume-6210
I0912 13:42:47.056790       1 event.go:294] "Event occurred" object="volume-expand-1710-6821/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0912 13:42:47.204645       1 namespace_controller.go:185] Namespace has been deleted disruption-2-5982
I0912 13:42:47.300844       1 namespace_controller.go:185] Namespace has been deleted disruption-4261
I0912 13:42:47.377000       1 event.go:294] "Event occurred" object="volume-expand-1710/csi-hostpathckcll" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-expand-1710\" or manually created by system administrator"
E0912 13:42:47.458308       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-3155/default: secrets "default-token-4sc2c" is forbidden: unable to create new content in namespace configmap-3155 because it is being terminated
I0912 13:42:47.572475       1 garbagecollector.go:471] "Processing object" object="services-978/externalname-service-nc5d9" objectUID=5a0eafd4-74f6-4f19-b2eb-94b567b9ce01 kind="EndpointSlice" virtual=false
I0912 13:42:47.575078       1 garbagecollector.go:580] "Deleting object" object="services-978/externalname-service-nc5d9" objectUID=5a0eafd4-74f6-4f19-b2eb-94b567b9ce01 kind="EndpointSlice" propagationPolicy=Background
E0912 13:42:47.861209       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-3335/default: secrets "default-token-cmg5k" is forbidden: unable to create new content in namespace provisioning-3335 because it is being terminated
E0912 13:42:47.895898       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9829/default: secrets "default-token-rbqdp" is forbidden: unable to create new content in namespace provisioning-9829 because it is being terminated
I0912 13:42:48.017609       1 pv_controller.go:1340] isVolumeReleased[pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d]: volume is released
I0912 13:42:48.158266       1 pv_controller_base.go:505] deletion of claim "volumemode-877/awsqkzhw" was already processed
E0912 13:42:48.455018       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:48.712474       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-01fe4c8d-5337-4e78-81c7-bea7ecbecd1d" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-03502e2f3f8ffe3a5") on node "ip-172-20-45-127.eu-central-1.compute.internal"
E0912 13:42:48.715411       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-4155/pvc-zptps: storageclass.storage.k8s.io "provisioning-4155" not found
I0912 13:42:48.715889       1 event.go:294] "Event occurred" object="provisioning-4155/pvc-zptps" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-4155\" not found"
I0912 13:42:48.840547       1 pv_controller.go:879] volume "local-mlk9v" entered phase "Available"
E0912 13:42:49.052029       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
E0912 13:42:49.205451       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
E0912 13:42:49.378443       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
E0912 13:42:49.526759       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
I0912 13:42:49.644522       1 pv_controller.go:879] volume "pvc-0eb889a3-6f45-4c51-8c81-ac4edcce5a4d" entered phase "Bound"
I0912 13:42:49.644907       1 pv_controller.go:982] volume "pvc-0eb889a3-6f45-4c51-8c81-ac4edcce5a4d" bound to claim "volume-expand-1710/csi-hostpathckcll"
I0912 13:42:49.652054       1 namespace_controller.go:185] Namespace has been deleted provisioning-231-2257
I0912 13:42:49.660584       1 pv_controller.go:823] claim "volume-expand-1710/csi-hostpathckcll" entered phase "Bound"
I0912 13:42:49.737745       1 stateful_set_control.go:521] StatefulSet statefulset-5664/ss terminating Pod ss-0 for scale down
I0912 13:42:49.754034       1 event.go:294] "Event occurred" object="statefulset-5664/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-0 in StatefulSet ss successful"
E0912 13:42:49.792038       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
E0912 13:42:50.032485       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
I0912 13:42:50.282968       1 namespace_controller.go:185] Namespace has been deleted cronjob-193
I0912 13:42:50.456469       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4989-6272/csi-mockplugin-6d6f88f74f" objectUID=830a4e1f-ee0e-487c-a622-220701fc28b5 kind="ControllerRevision" virtual=false
I0912 13:42:50.457099       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-4989-6272/csi-mockplugin
I0912 13:42:50.457256       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4989-6272/csi-mockplugin-0" objectUID=7a8837e3-40ba-4c5e-9f87-929d2760006b kind="Pod" virtual=false
I0912 13:42:50.466604       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-4989-6272/csi-mockplugin-0" objectUID=7a8837e3-40ba-4c5e-9f87-929d2760006b kind="Pod" propagationPolicy=Background
I0912 13:42:50.466920       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-4989-6272/csi-mockplugin-6d6f88f74f" objectUID=830a4e1f-ee0e-487c-a622-220701fc28b5 kind="ControllerRevision" propagationPolicy=Background
E0912 13:42:50.496618       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
I0912 13:42:50.515620       1 namespace_controller.go:185] Namespace has been deleted disruption-1656
I0912 13:42:50.708599       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4989-6272/csi-mockplugin-attacher-6c666b94cc" objectUID=df3a7c5f-41c2-45e3-8990-27b8caf2334e kind="ControllerRevision" virtual=false
I0912 13:42:50.708808       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-4989-6272/csi-mockplugin-attacher
I0912 13:42:50.708977       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-4989-6272/csi-mockplugin-attacher-0" objectUID=08bc20d6-bd15-476e-8045-7ec083d8a3f0 kind="Pod" virtual=false
I0912 13:42:50.711965       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-4989-6272/csi-mockplugin-attacher-6c666b94cc" objectUID=df3a7c5f-41c2-45e3-8990-27b8caf2334e kind="ControllerRevision" propagationPolicy=Background
I0912 13:42:50.711970       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-4989-6272/csi-mockplugin-attacher-0" objectUID=08bc20d6-bd15-476e-8045-7ec083d8a3f0 kind="Pod" propagationPolicy=Background
E0912 13:42:50.954302       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
I0912 13:42:51.224900       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4989
I0912 13:42:51.333784       1 garbagecollector.go:471] "Processing object" object="container-runtime-5260/terminate-cmd-rpac28d4473-f093-4751-b475-77db101c515a" objectUID=f152f1ad-2d69-42d9-bc5e-c8982fc213f7 kind="CiliumEndpoint" virtual=false
I0912 13:42:51.338481       1 garbagecollector.go:580] "Deleting object" object="container-runtime-5260/terminate-cmd-rpac28d4473-f093-4751-b475-77db101c515a" objectUID=f152f1ad-2d69-42d9-bc5e-c8982fc213f7 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:51.402669       1 replica_set.go:563] "Too few replicas" replicaSet="proxy-8481/proxy-service-gr5cr" need=1 creating=1
I0912 13:42:51.415127       1 event.go:294] "Event occurred" object="proxy-8481/proxy-service-gr5cr" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: proxy-service-gr5cr-gds7t"
I0912 13:42:51.452479       1 namespace_controller.go:185] Namespace has been deleted container-probe-2065
I0912 13:42:51.717292       1 event.go:294] "Event occurred" object="csi-mock-volumes-2990/pvc-blsgz" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-2990\" or manually created by system administrator"
I0912 13:42:51.717862       1 event.go:294] "Event occurred" object="provisioning-3334/pvc-frnt6" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-3334\" not found"
E0912 13:42:51.717798       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-3334/pvc-frnt6: storageclass.storage.k8s.io "provisioning-3334" not found
I0912 13:42:51.738203       1 pv_controller.go:879] volume "pvc-6c8d9229-f738-46cf-a318-f4249d666442" entered phase "Bound"
I0912 13:42:51.738983       1 pv_controller.go:982] volume "pvc-6c8d9229-f738-46cf-a318-f4249d666442" bound to claim "csi-mock-volumes-2990/pvc-blsgz"
I0912 13:42:51.750153       1 pv_controller.go:823] claim "csi-mock-volumes-2990/pvc-blsgz" entered phase "Bound"
E0912 13:42:51.770594       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
I0912 13:42:51.829816       1 pv_controller.go:879] volume "local-zfjrb" entered phase "Available"
I0912 13:42:52.203908       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-6c8d9229-f738-46cf-a318-f4249d666442" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-2990^4") from node "ip-172-20-45-127.eu-central-1.compute.internal"
E0912 13:42:52.238051       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-7291/pvc-zfxnw: storageclass.storage.k8s.io "provisioning-7291" not found
I0912 13:42:52.238307       1 event.go:294] "Event occurred" object="provisioning-7291/pvc-zfxnw" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-7291\" not found"
I0912 13:42:52.320542       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9829^45d5e5ff-13cf-11ec-a6d1-6aa257ffd746") on node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:42:52.322993       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9829^45d5e5ff-13cf-11ec-a6d1-6aa257ffd746") on node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:42:52.351797       1 pv_controller.go:879] volume "local-9ttcm" entered phase "Available"
I0912 13:42:52.557482       1 namespace_controller.go:185] Namespace has been deleted configmap-3155
I0912 13:42:52.719311       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-6c8d9229-f738-46cf-a318-f4249d666442" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-2990^4") from node "ip-172-20-45-127.eu-central-1.compute.internal"
I0912 13:42:52.719854       1 event.go:294] "Event occurred" object="csi-mock-volumes-2990/pvc-volume-tester-xcgcd" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-6c8d9229-f738-46cf-a318-f4249d666442\" "
I0912 13:42:52.781306       1 namespace_controller.go:185] Namespace has been deleted kubectl-3520
I0912 13:42:52.835879       1 garbagecollector.go:471] "Processing object" object="services-978/execpodhcw4v" objectUID=25247e43-4c56-451a-9edf-05c6ba81592f kind="CiliumEndpoint" virtual=false
I0912 13:42:52.836457       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-f5ae6cb2-161e-45d5-b33e-64e900adb1eb" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9829^45d5e5ff-13cf-11ec-a6d1-6aa257ffd746") on node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:42:52.839828       1 garbagecollector.go:580] "Deleting object" object="services-978/execpodhcw4v" objectUID=25247e43-4c56-451a-9edf-05c6ba81592f kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:52.841423       1 replica_set.go:563] "Too few replicas" replicaSet="services-978/externalname-service" need=2 creating=1
I0912 13:42:52.944559       1 namespace_controller.go:185] Namespace has been deleted provisioning-3335
I0912 13:42:52.998150       1 garbagecollector.go:471] "Processing object" object="services-978/externalname-service-vs5pg" objectUID=ba85f222-e261-40e7-b9c5-bcd95169cd52 kind="Pod" virtual=false
I0912 13:42:52.998302       1 garbagecollector.go:471] "Processing object" object="services-978/externalname-service-8nhwj" objectUID=61cdee2b-4a0f-4412-a48a-d31212c72e8d kind="Pod" virtual=false
I0912 13:42:53.015496       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-6328, name: inline-volume-tester-jd5cl, uid: fc206e4e-ed3b-431f-8e30-30828aa1f186] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0912 13:42:53.016398       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0" objectUID=0dceeca3-5a5f-4c51-8515-7d1fdceecc77 kind="PersistentVolumeClaim" virtual=false
I0912 13:42:53.016508       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-tester-jd5cl" objectUID=02882ed4-6b1f-4769-a2e4-81e514f3e890 kind="CiliumEndpoint" virtual=false
I0912 13:42:53.016772       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-tester-jd5cl" objectUID=fc206e4e-ed3b-431f-8e30-30828aa1f186 kind="Pod" virtual=false
I0912 13:42:53.020916       1 garbagecollector.go:595] adding [cilium.io/v2/CiliumEndpoint, namespace: ephemeral-6328, name: inline-volume-tester-jd5cl, uid: 02882ed4-6b1f-4769-a2e4-81e514f3e890] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-6328, name: inline-volume-tester-jd5cl, uid: fc206e4e-ed3b-431f-8e30-30828aa1f186] is deletingDependents
I0912 13:42:53.021246       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-6328, name: inline-volume-tester-jd5cl-my-volume-0, uid: 0dceeca3-5a5f-4c51-8515-7d1fdceecc77] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-6328, name: inline-volume-tester-jd5cl, uid: fc206e4e-ed3b-431f-8e30-30828aa1f186] is deletingDependents
I0912 13:42:53.024671       1 garbagecollector.go:580] "Deleting object" object="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0" objectUID=0dceeca3-5a5f-4c51-8515-7d1fdceecc77 kind="PersistentVolumeClaim" propagationPolicy=Background
I0912 13:42:53.025081       1 garbagecollector.go:580] "Deleting object" object="ephemeral-6328/inline-volume-tester-jd5cl" objectUID=02882ed4-6b1f-4769-a2e4-81e514f3e890 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:53.030708       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0" objectUID=0dceeca3-5a5f-4c51-8515-7d1fdceecc77 kind="PersistentVolumeClaim" virtual=false
I0912 13:42:53.031693       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-6328/inline-volume-tester-jd5cl" PVC="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0"
I0912 13:42:53.031818       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0"
I0912 13:42:53.032158       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-tester-jd5cl" objectUID=02882ed4-6b1f-4769-a2e4-81e514f3e890 kind="CiliumEndpoint" virtual=false
I0912 13:42:53.034202       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-tester-jd5cl" objectUID=fc206e4e-ed3b-431f-8e30-30828aa1f186 kind="Pod" virtual=false
I0912 13:42:53.036946       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-6328, name: inline-volume-tester-jd5cl-my-volume-0, uid: 0dceeca3-5a5f-4c51-8515-7d1fdceecc77] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-6328, name: inline-volume-tester-jd5cl, uid: fc206e4e-ed3b-431f-8e30-30828aa1f186] is deletingDependents
I0912 13:42:53.036989       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0" objectUID=0dceeca3-5a5f-4c51-8515-7d1fdceecc77 kind="PersistentVolumeClaim" virtual=false
E0912 13:42:53.054138       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
I0912 13:42:53.061691       1 namespace_controller.go:185] Namespace has been deleted provisioning-9829
I0912 13:42:53.210633       1 garbagecollector.go:471] "Processing object" object="provisioning-9829-8662/csi-hostpathplugin-0" objectUID=ead11f6d-a813-476b-ab5a-f95d43add930 kind="Pod" virtual=false
I0912 13:42:53.210638       1 stateful_set.go:440] StatefulSet has been deleted provisioning-9829-8662/csi-hostpathplugin
I0912 13:42:53.210662       1 garbagecollector.go:471] "Processing object" object="provisioning-9829-8662/csi-hostpathplugin-8cb9bf77f" objectUID=0e8a4f4d-599d-462e-8128-ffbeb1bbba9d kind="ControllerRevision" virtual=false
I0912 13:42:53.215778       1 garbagecollector.go:580] "Deleting object" object="provisioning-9829-8662/csi-hostpathplugin-0" objectUID=ead11f6d-a813-476b-ab5a-f95d43add930 kind="Pod" propagationPolicy=Background
I0912 13:42:53.216081       1 garbagecollector.go:580] "Deleting object" object="provisioning-9829-8662/csi-hostpathplugin-8cb9bf77f" objectUID=0e8a4f4d-599d-462e-8128-ffbeb1bbba9d kind="ControllerRevision" propagationPolicy=Background
E0912 13:42:53.367725       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
E0912 13:42:53.379737       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
E0912 13:42:53.585795       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
E0912 13:42:53.759471       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
E0912 13:42:53.936527       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
E0912 13:42:54.209697       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
E0912 13:42:54.266891       1 tokens_controller.go:262] error synchronizing serviceaccount volumemode-877/default: secrets "default-token-vgnsm" is forbidden: unable to create new content in namespace volumemode-877 because it is being terminated
E0912 13:42:54.528461       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
E0912 13:42:54.787447       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 13:42:54.981155       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
W0912 13:42:55.296890       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
I0912 13:42:55.297546       1 replica_set.go:563] "Too few replicas" replicaSet="services-440/service-headless" need=3 creating=1
W0912 13:42:55.306902       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
I0912 13:42:55.313918       1 replica_set.go:563] "Too few replicas" replicaSet="services-440/service-headless-toggled" need=3 creating=1
W0912 13:42:55.338451       1 utils.go:265] Service services-440/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: 
I0912 13:42:55.386516       1 garbagecollector.go:471] "Processing object" object="services-440/service-headless-qhrth" objectUID=7108fee5-36f8-48d7-8046-5b0aa3d2b7c2 kind="Pod" virtual=false
I0912 13:42:55.386586       1 garbagecollector.go:471] "Processing object" object="services-440/service-headless-drzt2" objectUID=523c68e0-9394-41d0-a4ae-1b67cbc39c25 kind="Pod" virtual=false
I0912 13:42:55.386599       1 garbagecollector.go:471] "Processing object" object="services-440/service-headless-v8lrr" objectUID=3272b715-aeeb-4d31-a387-2cc2a64dca67 kind="Pod" virtual=false
I0912 13:42:55.388325       1 garbagecollector.go:471] "Processing object" object="services-440/service-headless-toggled-64rpg" objectUID=aa8f0ced-51cc-4c97-a3f6-3b0dcfcef8f5 kind="Pod" virtual=false
I0912 13:42:55.388360       1 garbagecollector.go:471] "Processing object" object="services-440/service-headless-toggled-dd85f" objectUID=42158e06-4811-4101-a539-eb726bc84815 kind="Pod" virtual=false
I0912 13:42:55.388372       1 garbagecollector.go:471] "Processing object" object="services-440/service-headless-toggled-cpdkg" objectUID=0f85578c-4fb5-4f2f-8603-bc6256597b81 kind="Pod" virtual=false
E0912 13:42:55.719784       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
E0912 13:42:55.810592       1 tokens_controller.go:262] error synchronizing serviceaccount clientset-5227/default: secrets "default-token-gdlcv" is forbidden: unable to create new content in namespace clientset-5227 because it is being terminated
E0912 13:42:55.884580       1 namespace_controller.go:162] deletion of namespace services-978 failed: unexpected items still remain in namespace: services-978 for gvr: /v1, Resource=pods
E0912 13:42:55.967429       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
I0912 13:42:55.987277       1 garbagecollector.go:471] "Processing object" object="services-6622/pod1" objectUID=6a05e13b-9cb8-475a-bc9d-ae37eae26b74 kind="CiliumEndpoint" virtual=false
I0912 13:42:56.001960       1 garbagecollector.go:580] "Deleting object" object="services-6622/pod1" objectUID=6a05e13b-9cb8-475a-bc9d-ae37eae26b74 kind="CiliumEndpoint" propagationPolicy=Background
E0912 13:42:56.181164       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
E0912 13:42:56.218261       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
E0912 13:42:56.371839       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
E0912 13:42:56.482289       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:56.548252       1 pv_controller.go:930] claim "provisioning-3334/pvc-frnt6" bound to volume "local-zfjrb"
I0912 13:42:56.555967       1 garbagecollector.go:471] "Processing object" object="services-6622/pod2" objectUID=ca9b7f55-38ab-4bf8-a148-61b2e1b80fa7 kind="CiliumEndpoint" virtual=false
W0912 13:42:56.563637       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "services-6622/multi-endpoint-test", retrying. Error: EndpointSlice informer cache is out of date
I0912 13:42:56.566869       1 garbagecollector.go:580] "Deleting object" object="services-6622/pod2" objectUID=ca9b7f55-38ab-4bf8-a148-61b2e1b80fa7 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:42:56.573842       1 pv_controller.go:879] volume "local-zfjrb" entered phase "Bound"
I0912 13:42:56.573876       1 pv_controller.go:982] volume "local-zfjrb" bound to claim "provisioning-3334/pvc-frnt6"
I0912 13:42:56.574245       1 endpoints_controller.go:374] "Error syncing endpoints, retrying" service="services-6622/multi-endpoint-test" err="Operation cannot be fulfilled on endpoints \"multi-endpoint-test\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:42:56.574335       1 event.go:294] "Event occurred" object="services-6622/multi-endpoint-test" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint services-6622/multi-endpoint-test: Operation cannot be fulfilled on endpoints \"multi-endpoint-test\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:42:56.592490       1 pv_controller.go:823] claim "provisioning-3334/pvc-frnt6" entered phase "Bound"
I0912 13:42:56.592795       1 pv_controller.go:930] claim "provisioning-7291/pvc-zfxnw" bound to volume "local-9ttcm"
I0912 13:42:56.606342       1 pv_controller.go:879] volume "local-9ttcm" entered phase "Bound"
I0912 13:42:56.606512       1 pv_controller.go:982] volume "local-9ttcm" bound to claim "provisioning-7291/pvc-zfxnw"
I0912 13:42:56.616776       1 pv_controller.go:823] claim "provisioning-7291/pvc-zfxnw" entered phase "Bound"
I0912 13:42:56.617499       1 pv_controller.go:930] claim "volumemode-350/pvc-mzv5d" bound to volume "aws-5p2ln"
I0912 13:42:56.632049       1 pv_controller.go:879] volume "aws-5p2ln" entered phase "Bound"
I0912 13:42:56.632232       1 pv_controller.go:982] volume "aws-5p2ln" bound to claim "volumemode-350/pvc-mzv5d"
I0912 13:42:56.642067       1 pv_controller.go:823] claim "volumemode-350/pvc-mzv5d" entered phase "Bound"
I0912 13:42:56.642699       1 pv_controller.go:930] claim "provisioning-4155/pvc-zptps" bound to volume "local-mlk9v"
I0912 13:42:56.653058       1 pv_controller.go:879] volume "local-mlk9v" entered phase "Bound"
I0912 13:42:56.653087       1 pv_controller.go:982] volume "local-mlk9v" bound to claim "provisioning-4155/pvc-zptps"
I0912 13:42:56.659301       1 pv_controller.go:823] claim "provisioning-4155/pvc-zptps" entered phase "Bound"
E0912 13:42:56.680940       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
E0912 13:42:56.913443       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
I0912 13:42:56.974106       1 event.go:294] "Event occurred" object="volume-6763/aws6nq6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0912 13:42:57.109968       1 garbagecollector.go:471] "Processing object" object="statefulset-5664/ss-677d6db895" objectUID=64fe8234-56a0-4e11-97c7-a1795b5543f7 kind="ControllerRevision" virtual=false
I0912 13:42:57.110381       1 stateful_set.go:440] StatefulSet has been deleted statefulset-5664/ss
I0912 13:42:57.122043       1 garbagecollector.go:580] "Deleting object" object="statefulset-5664/ss-677d6db895" objectUID=64fe8234-56a0-4e11-97c7-a1795b5543f7 kind="ControllerRevision" propagationPolicy=Background
E0912 13:42:57.163727       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:42:57.256818       1 event.go:294] "Event occurred" object="volume-6763/aws6nq6f" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0912 13:42:57.332357       1 namespace_controller.go:185] Namespace has been deleted prestop-6855
I0912 13:42:57.488453       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "aws-5p2ln" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0eb2b809b8bd110c1") from node "ip-172-20-48-249.eu-central-1.compute.internal"
E0912 13:42:57.492435       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 13:42:57.496456       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
E0912 13:42:57.991897       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
I0912 13:42:58.334128       1 garbagecollector.go:471] "Processing object" object="services-6622/multi-endpoint-test-zsr7t" objectUID=543565eb-6356-4425-b73f-8f445645fcdf kind="EndpointSlice" virtual=false
I0912 13:42:58.339385       1 garbagecollector.go:580] "Deleting object" object="services-6622/multi-endpoint-test-zsr7t" objectUID=543565eb-6356-4425-b73f-8f445645fcdf kind="EndpointSlice" propagationPolicy=Background
E0912 13:42:58.532316       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-9829-8662/default: secrets "default-token-fgzgp" is forbidden: unable to create new content in namespace provisioning-9829-8662 because it is being terminated
E0912 13:42:58.861553       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
I0912 13:42:59.132922       1 event.go:294] "Event occurred" object="deployment-5587/test-orphan-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-orphan-deployment-847dcfb7fb to 1"
I0912 13:42:59.133724       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-5587/test-orphan-deployment-847dcfb7fb" need=1 creating=1
I0912 13:42:59.145417       1 event.go:294] "Event occurred" object="deployment-5587/test-orphan-deployment-847dcfb7fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-orphan-deployment-847dcfb7fb-r26rl"
I0912 13:42:59.145701       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-5587/test-orphan-deployment" err="Operation cannot be fulfilled on deployments.apps \"test-orphan-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:42:59.396466       1 namespace_controller.go:185] Namespace has been deleted volumemode-877
I0912 13:42:59.708798       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "aws-5p2ln" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0eb2b809b8bd110c1") from node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:42:59.708976       1 event.go:294] "Event occurred" object="volumemode-350/pod-e51fee67-cd83-44f0-abcb-e62e5c897b64" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"aws-5p2ln\" "
I0912 13:42:59.829210       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-98559cd8-f1a8-4f46-9289-49e87eee3891" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-01ceb9189da0217b0") on node "ip-172-20-34-134.eu-central-1.compute.internal"
I0912 13:42:59.831290       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-98559cd8-f1a8-4f46-9289-49e87eee3891" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-01ceb9189da0217b0") on node "ip-172-20-34-134.eu-central-1.compute.internal"
I0912 13:43:00.274025       1 event.go:294] "Event occurred" object="webhook-3594/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0912 13:43:00.274769       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-3594/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0912 13:43:00.283436       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-3594/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:43:00.297188       1 event.go:294] "Event occurred" object="webhook-3594/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-bv86r"
E0912 13:43:00.338900       1 namespace_controller.go:162] deletion of namespace services-440 failed: unexpected items still remain in namespace: services-440 for gvr: /v1, Resource=pods
I0912 13:43:00.732963       1 pv_controller.go:879] volume "pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060" entered phase "Bound"
I0912 13:43:00.733020       1 pv_controller.go:982] volume "pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060" bound to claim "volume-6763/aws6nq6f"
I0912 13:43:00.745159       1 pv_controller.go:823] claim "volume-6763/aws6nq6f" entered phase "Bound"
I0912 13:43:00.851534       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-3768/pvc-6sgtb"
I0912 13:43:00.861827       1 pv_controller.go:640] volume "local-6h89g" is released and reclaim policy "Retain" will be executed
I0912 13:43:00.865478       1 pv_controller.go:879] volume "local-6h89g" entered phase "Released"
I0912 13:43:00.881432       1 pvc_protection_controller.go:291] "PVC is unused" PVC="topology-4678/pvc-hfzmf"
I0912 13:43:00.886963       1 pv_controller.go:640] volume "pvc-98559cd8-f1a8-4f46-9289-49e87eee3891" is released and reclaim policy "Delete" will be executed
I0912 13:43:00.890935       1 pv_controller.go:879] volume "pvc-98559cd8-f1a8-4f46-9289-49e87eee3891" entered phase "Released"
I0912 13:43:00.893403       1 pv_controller.go:1340] isVolumeReleased[pvc-98559cd8-f1a8-4f46-9289-49e87eee3891]: volume is released
I0912 13:43:00.911851       1 namespace_controller.go:185] Namespace has been deleted clientset-5227
I0912 13:43:00.969519       1 pv_controller_base.go:505] deletion of claim "volume-3768/pvc-6sgtb" was already processed
I0912 13:43:01.347406       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-02a8edb81457b618d") from node "ip-172-20-48-249.eu-central-1.compute.internal"
E0912 13:43:01.452814       1 tokens_controller.go:262] error synchronizing serviceaccount container-lifecycle-hook-3508/default: secrets "default-token-mfmql" is forbidden: unable to create new content in namespace container-lifecycle-hook-3508 because it is being terminated
I0912 13:43:01.556491       1 garbagecollector.go:471] "Processing object" object="proxy-8481/proxy-service-gr5cr-gds7t" 
objectUID=5a42f0d1-dffc-4f8e-9713-236475f3d3d5 kind=\"Pod\" virtual=false\nI0912 13:43:01.559732       1 garbagecollector.go:580] \"Deleting object\" object=\"proxy-8481/proxy-service-gr5cr-gds7t\" objectUID=5a42f0d1-dffc-4f8e-9713-236475f3d3d5 kind=\"Pod\" propagationPolicy=Background\nE0912 13:43:01.562354       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods\nI0912 13:43:01.663843       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-5061\nI0912 13:43:02.542114       1 namespace_controller.go:185] Namespace has been deleted services-978\nE0912 13:43:02.674038       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-5664/default: secrets \"default-token-jqwrl\" is forbidden: unable to create new content in namespace statefulset-5664 because it is being terminated\nI0912 13:43:02.758012       1 garbagecollector.go:471] \"Processing object\" object=\"statefulset-5664/test-6h447\" objectUID=52ea74f1-cc55-4ed4-9a41-00df7cb2f9d8 kind=\"EndpointSlice\" virtual=false\nI0912 13:43:02.762687       1 garbagecollector.go:580] \"Deleting object\" object=\"statefulset-5664/test-6h447\" objectUID=52ea74f1-cc55-4ed4-9a41-00df7cb2f9d8 kind=\"EndpointSlice\" propagationPolicy=Background\nE0912 13:43:03.543436       1 tokens_controller.go:262] error synchronizing serviceaccount kubectl-3402/default: secrets \"default-token-snl8g\" is forbidden: unable to create new content in namespace kubectl-3402 because it is being terminated\nI0912 13:43:03.606521       1 namespace_controller.go:185] Namespace has been deleted proxy-5813\nI0912 13:43:03.613992       1 garbagecollector.go:471] \"Processing object\" object=\"services-6622/execpodtnctl\" objectUID=5db0281c-22ca-489f-b802-2443c6979c0d kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:03.621630       1 garbagecollector.go:580] \"Deleting object\" 
object=\"services-6622/execpodtnctl\" objectUID=5db0281c-22ca-489f-b802-2443c6979c0d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:03.625415       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02a8edb81457b618d\") from node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:43:03.625793       1 event.go:294] \"Event occurred\" object=\"volume-6763/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\\\" \"\nI0912 13:43:03.809317       1 namespace_controller.go:185] Namespace has been deleted provisioning-9829-8662\nE0912 13:43:03.827037       1 tokens_controller.go:262] error synchronizing serviceaccount services-6622/default: secrets \"default-token-mqzbz\" is forbidden: unable to create new content in namespace services-6622 because it is being terminated\nI0912 13:43:03.830755       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-3334/pvc-frnt6\"\nI0912 13:43:03.841466       1 pv_controller.go:640] volume \"local-zfjrb\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:43:03.849433       1 pv_controller.go:879] volume \"local-zfjrb\" entered phase \"Released\"\nE0912 13:43:03.875652       1 tokens_controller.go:262] error synchronizing serviceaccount kubelet-test-7591/default: secrets \"default-token-8p7lf\" is forbidden: unable to create new content in namespace kubelet-test-7591 because it is being terminated\nI0912 13:43:03.958264       1 pv_controller_base.go:505] deletion of claim \"provisioning-3334/pvc-frnt6\" was already processed\nI0912 13:43:05.407153       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-j5cc2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b\") on node 
\"ip-172-20-60-94.eu-central-1.compute.internal\" \nI0912 13:43:05.412209       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"aws-j5cc2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b\") on node \"ip-172-20-60-94.eu-central-1.compute.internal\" \nE0912 13:43:05.420825       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:43:06.435093       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4989-6272\nI0912 13:43:06.476860       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-7291/pvc-zfxnw\"\nI0912 13:43:06.482216       1 pv_controller.go:640] volume \"local-9ttcm\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:43:06.485470       1 pv_controller.go:879] volume \"local-9ttcm\" entered phase \"Released\"\nI0912 13:43:06.513282       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-3508\nI0912 13:43:06.589231       1 pv_controller_base.go:505] deletion of claim \"provisioning-7291/pvc-zfxnw\" was already processed\nI0912 13:43:06.826265       1 garbagecollector.go:471] \"Processing object\" object=\"container-probe-57/liveness-3ba4de1f-8275-46fe-8292-1b2d935ef5a3\" objectUID=2abf1af1-e927-4e47-b7d8-1f2c6c208a96 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:06.832848       1 garbagecollector.go:580] \"Deleting object\" object=\"container-probe-57/liveness-3ba4de1f-8275-46fe-8292-1b2d935ef5a3\" objectUID=2abf1af1-e927-4e47-b7d8-1f2c6c208a96 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:07.126776       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-2990/pvc-blsgz\"\nI0912 13:43:07.134663       1 pv_controller.go:640] volume \"pvc-6c8d9229-f738-46cf-a318-f4249d666442\" is released and 
reclaim policy \"Delete\" will be executed\nI0912 13:43:07.139502       1 pv_controller.go:879] volume \"pvc-6c8d9229-f738-46cf-a318-f4249d666442\" entered phase \"Released\"\nI0912 13:43:07.147605       1 pv_controller.go:1340] isVolumeReleased[pvc-6c8d9229-f738-46cf-a318-f4249d666442]: volume is released\nI0912 13:43:07.598352       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-7955/pvc-p2bqb\"\nI0912 13:43:07.608628       1 pv_controller.go:640] volume \"aws-j5cc2\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:43:07.613953       1 pv_controller.go:879] volume \"aws-j5cc2\" entered phase \"Released\"\nI0912 13:43:07.683610       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5587/test-orphan-deployment\" objectUID=91002f1d-6564-4055-b1c6-30783c78b9b2 kind=\"Deployment\" virtual=false\nI0912 13:43:07.714262       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-5587/test-orphan-deployment\"\nI0912 13:43:07.812798       1 namespace_controller.go:185] Namespace has been deleted statefulset-5664\nW0912 13:43:07.825011       1 reconciler.go:335] Multi-Attach error for volume \"aws-lmp8z\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-48-249.eu-central-1.compute.internal and can't be attached to another\nI0912 13:43:07.825128       1 event.go:294] \"Event occurred\" object=\"volume-2598/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"aws-lmp8z\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI0912 13:43:08.044496       1 namespace_controller.go:185] Namespace has been deleted services-440\nI0912 13:43:08.603401       1 namespace_controller.go:185] Namespace has been deleted kubectl-3402\nE0912 
13:43:08.688010       1 tokens_controller.go:262] error synchronizing serviceaccount volume-3768/default: secrets \"default-token-zvkbp\" is forbidden: unable to create new content in namespace volume-3768 because it is being terminated\nI0912 13:43:08.861136       1 namespace_controller.go:185] Namespace has been deleted services-6622\nI0912 13:43:09.455704       1 event.go:294] \"Event occurred\" object=\"volume-expand-431-5592/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0912 13:43:09.793454       1 event.go:294] \"Event occurred\" object=\"volume-expand-431/csi-hostpathtckzx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-431\\\" or manually created by system administrator\"\nI0912 13:43:09.858753       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-6c8d9229-f738-46cf-a318-f4249d666442\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2990^4\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:09.861307       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-6c8d9229-f738-46cf-a318-f4249d666442\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2990^4\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:09.927584       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-5587/test-adopt-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-adopt-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0912 13:43:10.158542       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-2990/pvc-blsgz\" was 
already processed\nI0912 13:43:10.379355       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-6c8d9229-f738-46cf-a318-f4249d666442\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2990^4\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:10.537313       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-8423-4358/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0912 13:43:10.827351       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3594/e2e-test-webhook-mzm4d\" objectUID=7ba7a0ab-7407-4b2a-a3d1-60ee9e7c7f4c kind=\"EndpointSlice\" virtual=false\nI0912 13:43:10.830883       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3594/e2e-test-webhook-mzm4d\" objectUID=7ba7a0ab-7407-4b2a-a3d1-60ee9e7c7f4c kind=\"EndpointSlice\" propagationPolicy=Background\nE0912 13:43:10.941272       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-3334/default: secrets \"default-token-2vvsc\" is forbidden: unable to create new content in namespace provisioning-3334 because it is being terminated\nI0912 13:43:10.959084       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3594/sample-webhook-deployment-78988fc6cd\" objectUID=6b79efc6-2dd8-4c61-a7aa-1ee2ef618816 kind=\"ReplicaSet\" virtual=false\nI0912 13:43:10.959442       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-3594/sample-webhook-deployment\"\nI0912 13:43:10.965836       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3594/sample-webhook-deployment-78988fc6cd\" objectUID=6b79efc6-2dd8-4c61-a7aa-1ee2ef618816 kind=\"ReplicaSet\" propagationPolicy=Background\nI0912 13:43:10.980542       1 garbagecollector.go:471] \"Processing object\" 
object=\"webhook-3594/sample-webhook-deployment-78988fc6cd-bv86r\" objectUID=92b8a195-4fe3-40bc-bb5a-97f59bf19cef kind=\"Pod\" virtual=false\nI0912 13:43:10.985319       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3594/sample-webhook-deployment-78988fc6cd-bv86r\" objectUID=92b8a195-4fe3-40bc-bb5a-97f59bf19cef kind=\"Pod\" propagationPolicy=Background\nI0912 13:43:11.002100       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-3594/sample-webhook-deployment-78988fc6cd-bv86r\" objectUID=d640c82c-5de2-4b83-b7d5-a063b2466850 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:11.009413       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-3594/sample-webhook-deployment-78988fc6cd-bv86r\" objectUID=d640c82c-5de2-4b83-b7d5-a063b2466850 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:11.114727       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-559/pvc-c67zh\"\nI0912 13:43:11.121759       1 pv_controller.go:640] volume \"local-29pwz\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:43:11.125850       1 pv_controller.go:879] volume \"local-29pwz\" entered phase \"Released\"\nI0912 13:43:11.227945       1 pv_controller_base.go:505] deletion of claim \"provisioning-559/pvc-c67zh\" was already processed\nI0912 13:43:11.418260       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-4155/pvc-zptps\"\nI0912 13:43:11.427475       1 pv_controller.go:640] volume \"local-mlk9v\" is released and reclaim policy \"Retain\" will be executed\nI0912 13:43:11.432603       1 pv_controller.go:879] volume \"local-mlk9v\" entered phase \"Released\"\nI0912 13:43:11.531735       1 pv_controller_base.go:505] deletion of claim \"provisioning-4155/pvc-zptps\" was already processed\nI0912 13:43:11.549961       1 event.go:294] \"Event occurred\" object=\"volume-expand-431/csi-hostpathtckzx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-431\\\" or manually created by system administrator\"\nI0912 13:43:11.551860       1 pv_controller.go:1340] isVolumeReleased[pvc-98559cd8-f1a8-4f46-9289-49e87eee3891]: volume is released\nE0912 13:43:11.597813       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0912 13:43:11.962677       1 pv_controller.go:879] volume \"pvc-6d7515f1-f6ea-4fff-af44-b248de01923c\" entered phase \"Bound\"\nI0912 13:43:11.962711       1 pv_controller.go:982] volume \"pvc-6d7515f1-f6ea-4fff-af44-b248de01923c\" bound to claim \"volume-expand-431/csi-hostpathtckzx\"\nI0912 13:43:11.976878       1 pv_controller.go:823] claim \"volume-expand-431/csi-hostpathtckzx\" entered phase \"Bound\"\nE0912 13:43:12.016672       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods\nE0912 13:43:12.128594       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-57/default: secrets \"default-token-88kcr\" is forbidden: unable to create new content in namespace container-probe-57 because it is being terminated\nI0912 13:43:12.225849       1 pv_controller_base.go:505] deletion of claim \"volume-7955/pvc-p2bqb\" was already processed\nI0912 13:43:12.390364       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-lmp8z\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:43:12.390637       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6d7515f1-f6ea-4fff-af44-b248de01923c\" (UniqueName: 
\"kubernetes.io/csi/csi-hostpath-volume-expand-431^5f4778bd-13cf-11ec-b3ea-c23bffa60b02\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:12.399772       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"aws-lmp8z\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:43:12.461703       1 pv_controller.go:1340] isVolumeReleased[pvc-98559cd8-f1a8-4f46-9289-49e87eee3891]: volume is released\nI0912 13:43:12.600303       1 pv_controller_base.go:505] deletion of claim \"topology-4678/pvc-hfzmf\" was already processed\nI0912 13:43:12.873798       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-j5cc2\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c394c73a215e355b\") on node \"ip-172-20-60-94.eu-central-1.compute.internal\" \nI0912 13:43:12.922022       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6d7515f1-f6ea-4fff-af44-b248de01923c\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-expand-431^5f4778bd-13cf-11ec-b3ea-c23bffa60b02\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:12.922368       1 event.go:294] \"Event occurred\" object=\"volume-expand-431/pod-020250ce-ff19-4ce6-9780-84a3775e18df\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6d7515f1-f6ea-4fff-af44-b248de01923c\\\" \"\nI0912 13:43:13.317954       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-98559cd8-f1a8-4f46-9289-49e87eee3891\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-01ceb9189da0217b0\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:13.747763       1 namespace_controller.go:185] Namespace has been deleted volume-3768\nI0912 13:43:14.140415       1 stateful_set_control.go:521] StatefulSet 
statefulset-2162/ss terminating Pod ss-2 for scale down\nI0912 13:43:14.151551       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-2 in StatefulSet ss successful\"\nI0912 13:43:14.634631       1 namespace_controller.go:185] Namespace has been deleted proxy-8481\nI0912 13:43:14.784362       1 stateful_set_control.go:521] StatefulSet statefulset-2162/ss terminating Pod ss-1 for scale down\nI0912 13:43:14.790772       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nI0912 13:43:15.660157       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-7189-8931/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0912 13:43:15.742449       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-7189-8931/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nE0912 13:43:15.803857       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-3594-markers/default: secrets \"default-token-vr5pk\" is forbidden: unable to create new content in namespace webhook-3594-markers because it is being terminated\nI0912 13:43:15.851598       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-7189-8931/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nI0912 13:43:15.890896       1 replica_set.go:563] \"Too few 
replicas\" replicaSet=\"deployment-5587/test-orphan-deployment-847dcfb7fb\" need=1 creating=1\nI0912 13:43:15.901932       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-5587/test-orphan-deployment-847dcfb7fb-r26rl\" objectUID=46ec70ca-fef3-4707-b2ec-224f2c55e98d kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:15.930545       1 garbagecollector.go:471] \"Processing object\" object=\"dns-1806/dns-test-67a4786c-56a6-4d59-8402-93a4edc85fb4\" objectUID=16b6eede-222d-4487-a006-d2cc44cf512e kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:15.939442       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-5587/test-orphan-deployment-847dcfb7fb-r26rl\" objectUID=46ec70ca-fef3-4707-b2ec-224f2c55e98d kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:15.961739       1 stateful_set_control.go:521] StatefulSet statefulset-2162/ss terminating Pod ss-0 for scale down\nI0912 13:43:15.964622       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-1806/dns-test-67a4786c-56a6-4d59-8402-93a4edc85fb4\" objectUID=16b6eede-222d-4487-a006-d2cc44cf512e kind=\"CiliumEndpoint\" propagationPolicy=Background\nW0912 13:43:15.971609       1 endpointslice_controller.go:306] Error syncing endpoint slices for service \"statefulset-2162/test\", retrying. 
Error: EndpointSlice informer cache is out of date\nI0912 13:43:15.981030       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-691-1721/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0912 13:43:15.996089       1 event.go:294] \"Event occurred\" object=\"statefulset-2162/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0912 13:43:16.055094       1 namespace_controller.go:185] Namespace has been deleted provisioning-3334\nI0912 13:43:16.176647       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-691-1721/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0912 13:43:16.286593       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-5587/test-adopt-deployment\"\nE0912 13:43:16.486766       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0912 13:43:16.713550       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-2990/default: secrets \"default-token-d87wh\" is forbidden: unable to create new content in namespace csi-mock-volumes-2990 because it is being terminated\nI0912 13:43:17.295098       1 namespace_controller.go:185] Namespace has been deleted container-runtime-5260\nI0912 13:43:17.412969       1 namespace_controller.go:185] Namespace has been deleted container-probe-57\nI0912 13:43:17.836033       1 garbagecollector.go:471] \"Processing object\" object=\"sctp-108/hostport\" 
objectUID=ffb81c67-4f93-4f6b-8a86-e4969b378545 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:17.844266       1 garbagecollector.go:580] \"Deleting object\" object=\"sctp-108/hostport\" objectUID=ffb81c67-4f93-4f6b-8a86-e4969b378545 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:18.524587       1 event.go:294] \"Event occurred\" object=\"provisioning-4786-326/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0912 13:43:18.593895       1 event.go:294] \"Event occurred\" object=\"provisioning-6646/pvc-gnjcr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0912 13:43:18.712109       1 event.go:294] \"Event occurred\" object=\"provisioning-6646/pvc-gnjcr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"ebs.csi.aws.com\\\" or manually created by system administrator\"\nI0912 13:43:19.072696       1 event.go:294] \"Event occurred\" object=\"provisioning-4786/pvc-qlfsz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-4786\\\" or manually created by system administrator\"\nI0912 13:43:19.073094       1 event.go:294] \"Event occurred\" object=\"provisioning-4786/pvc-qlfsz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-4786\\\" or manually created by system administrator\"\nE0912 13:43:19.339038       1 tokens_controller.go:262] error synchronizing 
serviceaccount volume-7955/default: secrets \"default-token-zvjdn\" is forbidden: unable to create new content in namespace volume-7955 because it is being terminated\nI0912 13:43:19.569862       1 namespace_controller.go:185] Namespace has been deleted provisioning-7291\nI0912 13:43:19.752107       1 namespace_controller.go:185] Namespace has been deleted downward-api-1874\nI0912 13:43:19.851542       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"aws-lmp8z\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:43:19.899365       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e136abf09eb16f0d\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:19.904305       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0e136abf09eb16f0d\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:19.919278       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-6cbea32f-4f7c-47a1-a966-dde7da716596\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00231b8d3aeb0c4eb\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:19.919387       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-lmp8z\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:19.948320       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-6cbea32f-4f7c-47a1-a966-dde7da716596\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-00231b8d3aeb0c4eb\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" 
I0912 13:43:20.095605       1 pv_controller.go:879] volume "pvc-5ef7699d-44b7-4f56-b9c1-8fc0ff7c9d8e" entered phase "Bound"
I0912 13:43:20.095643       1 pv_controller.go:982] volume "pvc-5ef7699d-44b7-4f56-b9c1-8fc0ff7c9d8e" bound to claim "provisioning-4786/pvc-qlfsz"
I0912 13:43:20.106566       1 pv_controller.go:823] claim "provisioning-4786/pvc-qlfsz" entered phase "Bound"
I0912 13:43:20.925980       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-5ef7699d-44b7-4f56-b9c1-8fc0ff7c9d8e" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-4786^641de971-13cf-11ec-8029-a6db75d98da1") from node "ip-172-20-45-127.eu-central-1.compute.internal"
I0912 13:43:21.060626       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-2990-6570/csi-mockplugin-59cbfbcd4f" objectUID=49128e37-2d80-4b1f-9227-4ce0647c48b3 kind="ControllerRevision" virtual=false
I0912 13:43:21.060731       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-2990-6570/csi-mockplugin-0" objectUID=fea55345-213c-45aa-973b-2b72b073a5f5 kind="Pod" virtual=false
I0912 13:43:21.060627       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-2990-6570/csi-mockplugin
I0912 13:43:21.062552       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-2990-6570/csi-mockplugin-59cbfbcd4f" objectUID=49128e37-2d80-4b1f-9227-4ce0647c48b3 kind="ControllerRevision" propagationPolicy=Background
I0912 13:43:21.064269       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-2990-6570/csi-mockplugin-0" objectUID=fea55345-213c-45aa-973b-2b72b073a5f5 kind="Pod" propagationPolicy=Background
I0912 13:43:21.146440       1 namespace_controller.go:185] Namespace has been deleted webhook-3594
I0912 13:43:21.232419       1 namespace_controller.go:185] Namespace has been deleted webhook-3594-markers
I0912 13:43:21.300886       1 namespace_controller.go:185] Namespace has been deleted deployment-5587
I0912 13:43:21.349672       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-2990-6570/csi-mockplugin-attacher-79759c8c5c" objectUID=acfef6da-a93d-435a-a1f0-d04f6d24eb6b kind="ControllerRevision" virtual=false
I0912 13:43:21.350066       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-2990-6570/csi-mockplugin-attacher
I0912 13:43:21.350174       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-2990-6570/csi-mockplugin-attacher-0" objectUID=672b869b-6a56-4090-a75b-c646cacf6e81 kind="Pod" virtual=false
I0912 13:43:21.361251       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-2990-6570/csi-mockplugin-attacher-79759c8c5c" objectUID=acfef6da-a93d-435a-a1f0-d04f6d24eb6b kind="ControllerRevision" propagationPolicy=Background
I0912 13:43:21.361612       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-2990-6570/csi-mockplugin-attacher-0" objectUID=672b869b-6a56-4090-a75b-c646cacf6e81 kind="Pod" propagationPolicy=Background
I0912 13:43:21.439658       1 event.go:294] "Event occurred" object="csi-mock-volumes-7189/pvc-np7pn" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-7189\" or manually created by system administrator"
I0912 13:43:21.450869       1 pv_controller.go:879] volume "pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5" entered phase "Bound"
I0912 13:43:21.450959       1 pv_controller.go:982] volume "pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5" bound to claim "csi-mock-volumes-7189/pvc-np7pn"
I0912 13:43:21.461309       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-5ef7699d-44b7-4f56-b9c1-8fc0ff7c9d8e" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-4786^641de971-13cf-11ec-8029-a6db75d98da1") from node "ip-172-20-45-127.eu-central-1.compute.internal"
I0912 13:43:21.461668       1 event.go:294] "Event occurred" object="provisioning-4786/hostpath-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-5ef7699d-44b7-4f56-b9c1-8fc0ff7c9d8e\" "
I0912 13:43:21.464797       1 pv_controller.go:823] claim "csi-mock-volumes-7189/pvc-np7pn" entered phase "Bound"
I0912 13:43:21.896474       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-2990
I0912 13:43:21.948159       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7189^4") from node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:43:22.028529       1 pv_controller.go:879] volume "pvc-f0462d9f-2288-44cf-8644-94d4cb32becd" entered phase "Bound"
I0912 13:43:22.028567       1 pv_controller.go:982] volume "pvc-f0462d9f-2288-44cf-8644-94d4cb32becd" bound to claim "provisioning-6646/pvc-gnjcr"
I0912 13:43:22.040643       1 pv_controller.go:823] claim "provisioning-6646/pvc-gnjcr" entered phase "Bound"
I0912 13:43:22.270463       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "aws-lmp8z" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868") from node "ip-172-20-34-134.eu-central-1.compute.internal"
I0912 13:43:22.270650       1 event.go:294] "Event occurred" object="volume-2598/aws-client" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"aws-lmp8z\" "
I0912 13:43:22.466087       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7189^4") from node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:43:22.466280       1 event.go:294] "Event occurred" object="csi-mock-volumes-7189/pvc-volume-tester-vktsg" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5\" "
I0912 13:43:22.532309       1 event.go:294] "Event occurred" object="volume-5197-7260/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0912 13:43:22.575508       1 garbagecollector.go:471] "Processing object" object="proxy-8536/test-service-s6m4z" objectUID=eba5de9b-218f-4baf-a710-d6af0314cad1 kind="EndpointSlice" virtual=false
I0912 13:43:22.584491       1 garbagecollector.go:580] "Deleting object" object="proxy-8536/test-service-s6m4z" objectUID=eba5de9b-218f-4baf-a710-d6af0314cad1 kind="EndpointSlice" propagationPolicy=Background
I0912 13:43:22.695062       1 namespace_controller.go:185] Namespace has been deleted provisioning-559
I0912 13:43:22.743384       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-expand-1710/csi-hostpathckcll"
I0912 13:43:22.754453       1 pv_controller.go:640] volume "pvc-0eb889a3-6f45-4c51-8c81-ac4edcce5a4d" is released and reclaim policy "Delete" will be executed
I0912 13:43:22.758532       1 pv_controller.go:879] volume "pvc-0eb889a3-6f45-4c51-8c81-ac4edcce5a4d" entered phase "Released"
I0912 13:43:22.761412       1 pv_controller.go:1340] isVolumeReleased[pvc-0eb889a3-6f45-4c51-8c81-ac4edcce5a4d]: volume is released
I0912 13:43:22.781092       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-f0462d9f-2288-44cf-8644-94d4cb32becd" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0b1d2ffd32e329fd9") from node "ip-172-20-60-94.eu-central-1.compute.internal"
I0912 13:43:22.808664       1 pv_controller_base.go:505] deletion of claim "volume-expand-1710/csi-hostpathckcll" was already processed
I0912 13:43:22.852967       1 event.go:294] "Event occurred" object="volume-5197/csi-hostpathw2dl2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-5197\" or manually created by system administrator"
I0912 13:43:22.856326       1 event.go:294] "Event occurred" object="volume-5197/csi-hostpathw2dl2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-5197\" or manually created by system administrator"
I0912 13:43:22.908650       1 expand_controller.go:289] Ignoring the PVC "volume-expand-431/csi-hostpathtckzx" (uid: "6d7515f1-f6ea-4fff-af44-b248de01923c") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
I0912 13:43:22.908900       1 event.go:294] "Event occurred" object="volume-expand-431/csi-hostpathtckzx" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ExternalExpanding" message="Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."
I0912 13:43:22.972173       1 namespace_controller.go:185] Namespace has been deleted provisioning-4155
I0912 13:43:22.989989       1 namespace_controller.go:185] Namespace has been deleted downward-api-6026
E0912 13:43:23.100419       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0912 13:43:23.109801       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:43:23.294512       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-560faf69-c019-4483-b626-794503c4bb94" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-07b2c1d13bbc0d08b") on node "ip-172-20-60-94.eu-central-1.compute.internal"
I0912 13:43:23.297168       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-560faf69-c019-4483-b626-794503c4bb94" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-07b2c1d13bbc0d08b") on node "ip-172-20-60-94.eu-central-1.compute.internal"
I0912 13:43:23.790240       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="ephemeral-6328/inline-volume-tester-jd5cl" PVC="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0"
I0912 13:43:23.790269       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0"
I0912 13:43:23.923771       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-348/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0912 13:43:23.925198       1 event.go:294] "Event occurred" object="webhook-348/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0912 13:43:23.932943       1 event.go:294] "Event occurred" object="webhook-348/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-tbr62"
I0912 13:43:23.948671       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-348/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:43:23.985112       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0"
I0912 13:43:23.994686       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328/inline-volume-tester-jd5cl" objectUID=fc206e4e-ed3b-431f-8e30-30828aa1f186 kind="Pod" virtual=false
I0912 13:43:23.996506       1 pv_controller.go:640] volume "pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77" is released and reclaim policy "Delete" will be executed
I0912 13:43:23.997563       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-6328, name: inline-volume-tester-jd5cl, uid: fc206e4e-ed3b-431f-8e30-30828aa1f186]
I0912 13:43:24.002847       1 pv_controller.go:879] volume "pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77" entered phase "Released"
I0912 13:43:24.010350       1 pv_controller.go:1340] isVolumeReleased[pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77]: volume is released
I0912 13:43:24.019758       1 pv_controller_base.go:505] deletion of claim "ephemeral-6328/inline-volume-tester-jd5cl-my-volume-0" was already processed
I0912 13:43:24.172155       1 namespace_controller.go:185] Namespace has been deleted pods-9349
I0912 13:43:24.409146       1 namespace_controller.go:185] Namespace has been deleted volume-7955
I0912 13:43:24.583590       1 garbagecollector.go:471] "Processing object" object="statefulset-2162/ss-696cb77d7d" objectUID=8a09bd19-5103-44d1-86c7-552a470a3971 kind="ControllerRevision" virtual=false
I0912 13:43:24.583809       1 stateful_set.go:440] StatefulSet has been deleted statefulset-2162/ss
I0912 13:43:24.596352       1 garbagecollector.go:580] "Deleting object" object="statefulset-2162/ss-696cb77d7d" objectUID=8a09bd19-5103-44d1-86c7-552a470a3971 kind="ControllerRevision" propagationPolicy=Background
I0912 13:43:24.817676       1 pvc_protection_controller.go:291] "PVC is unused" PVC="statefulset-2162/datadir-ss-0"
I0912 13:43:24.827064       1 pv_controller.go:640] volume "pvc-6cbea32f-4f7c-47a1-a966-dde7da716596" is released and reclaim policy "Delete" will be executed
I0912 13:43:24.830687       1 pv_controller.go:879] volume "pvc-6cbea32f-4f7c-47a1-a966-dde7da716596" entered phase "Released"
I0912 13:43:24.832843       1 pv_controller.go:1340] isVolumeReleased[pvc-6cbea32f-4f7c-47a1-a966-dde7da716596]: volume is released
I0912 13:43:24.917743       1 pvc_protection_controller.go:291] "PVC is unused" PVC="statefulset-2162/datadir-ss-1"
I0912 13:43:24.924695       1 pv_controller.go:640] volume "pvc-560faf69-c019-4483-b626-794503c4bb94" is released and reclaim policy "Delete" will be executed
I0912 13:43:24.932368       1 pv_controller.go:879] volume "pvc-560faf69-c019-4483-b626-794503c4bb94" entered phase "Released"
I0912 13:43:24.934254       1 pv_controller.go:1340] isVolumeReleased[pvc-560faf69-c019-4483-b626-794503c4bb94]: volume is released
I0912 13:43:24.936559       1 pv_controller.go:1340] isVolumeReleased[pvc-560faf69-c019-4483-b626-794503c4bb94]: volume is released
E0912 13:43:24.939736       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:43:25.031886       1 pvc_protection_controller.go:291] "PVC is unused" PVC="statefulset-2162/datadir-ss-2"
I0912 13:43:25.045482       1 pv_controller.go:640] volume "pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553" is released and reclaim policy "Delete" will be executed
I0912 13:43:25.053783       1 pv_controller.go:879] volume "pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553" entered phase "Released"
I0912 13:43:25.056113       1 pv_controller.go:1340] isVolumeReleased[pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553]: volume is released
I0912 13:43:25.056492       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-f0462d9f-2288-44cf-8644-94d4cb32becd" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0b1d2ffd32e329fd9") from node "ip-172-20-60-94.eu-central-1.compute.internal"
I0912 13:43:25.056801       1 event.go:294] "Event occurred" object="provisioning-6646/pod-b993d722-2a76-4323-b8a5-6777a0610a25" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-f0462d9f-2288-44cf-8644-94d4cb32becd\" "
I0912 13:43:25.392295       1 namespace_controller.go:185] Namespace has been deleted kubectl-7995
I0912 13:43:26.396551       1 namespace_controller.go:185] Namespace has been deleted configmap-1079
I0912 13:43:26.430417       1 event.go:294] "Event occurred" object="csi-mock-volumes-691/pvc-jswml" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-691\" or manually created by system administrator"
I0912 13:43:26.432351       1 event.go:294] "Event occurred" object="csi-mock-volumes-691/pvc-jswml" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-691\" or manually created by system administrator"
I0912 13:43:26.448498       1 pv_controller.go:879] volume "pvc-6906ab9f-4609-4a90-a309-ef000dc8cc0c" entered phase "Bound"
I0912 13:43:26.448532       1 pv_controller.go:982] volume "pvc-6906ab9f-4609-4a90-a309-ef000dc8cc0c" bound to claim "csi-mock-volumes-691/pvc-jswml"
I0912 13:43:26.456069       1 pv_controller.go:823] claim "csi-mock-volumes-691/pvc-jswml" entered phase "Bound"
E0912 13:43:26.545930       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-2990-6570/default: secrets "default-token-cqfb9" is forbidden: unable to create new content in namespace csi-mock-volumes-2990-6570 because it is being terminated
I0912 13:43:26.553455       1 event.go:294] "Event occurred" object="volume-5197/csi-hostpathw2dl2" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-volume-5197\" or manually created by system administrator"
I0912 13:43:26.557005       1 pv_controller.go:1340] isVolumeReleased[pvc-560faf69-c019-4483-b626-794503c4bb94]: volume is released
I0912 13:43:26.557129       1 pv_controller.go:1340] isVolumeReleased[pvc-6cbea32f-4f7c-47a1-a966-dde7da716596]: volume is released
I0912 13:43:26.557251       1 pv_controller.go:1340] isVolumeReleased[pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553]: volume is released
I0912 13:43:26.833198       1 namespace_controller.go:185] Namespace has been deleted topology-4678
I0912 13:43:26.940520       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-6906ab9f-4609-4a90-a309-ef000dc8cc0c" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-691^4") from node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:43:27.185462       1 namespace_controller.go:185] Namespace has been deleted secrets-6737
I0912 13:43:27.322258       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c19f4ace-8329-44c9-b2bb-ad8d1a31c553" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0e136abf09eb16f0d") on node "ip-172-20-45-127.eu-central-1.compute.internal"
I0912 13:43:27.458762       1 event.go:294] "Event occurred" object="csi-mock-volumes-691/pvc-volume-tester-fm9ks" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-6906ab9f-4609-4a90-a309-ef000dc8cc0c\" "
I0912 13:43:27.458864       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-6906ab9f-4609-4a90-a309-ef000dc8cc0c" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-691^4") from node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:43:27.813213       1 pv_controller.go:879] volume "pvc-adacac06-321d-48df-a15b-f97ab94e9343" entered phase "Bound"
I0912 13:43:27.813267       1 pv_controller.go:982] volume "pvc-adacac06-321d-48df-a15b-f97ab94e9343" bound to claim "volume-5197/csi-hostpathw2dl2"
I0912 13:43:27.833135       1 pv_controller.go:823] claim "volume-5197/csi-hostpathw2dl2" entered phase "Bound"
I0912 13:43:27.883066       1 event.go:294] "Event occurred" object="csi-mock-volumes-8423/pvc-444f4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E0912 13:43:27.925828       1 tokens_controller.go:262] error synchronizing serviceaccount pods-7875/default: secrets "default-token-h6svm" is forbidden: unable to create new content in namespace pods-7875 because it is being terminated
I0912 13:43:28.013051       1 event.go:294] "Event occurred" object="csi-mock-volumes-8423/pvc-444f4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8423\" or manually created by system administrator"
I0912 13:43:28.013307       1 event.go:294] "Event occurred" object="csi-mock-volumes-8423/pvc-444f4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8423\" or manually created by system administrator"
E0912 13:43:28.201265       1 tokens_controller.go:262] error synchronizing serviceaccount volume-expand-1710/default: secrets "default-token-k5qbq" is forbidden: unable to create new content in namespace volume-expand-1710 because it is being terminated
I0912 13:43:28.235853       1 pv_controller_base.go:505] deletion of claim "statefulset-2162/datadir-ss-2" was already processed
I0912 13:43:28.262804       1 pv_controller.go:879] volume "pvc-d6aeab61-cdd7-4b02-a0c7-79613d510a62" entered phase "Bound"
I0912 13:43:28.262841       1 pv_controller.go:982] volume "pvc-d6aeab61-cdd7-4b02-a0c7-79613d510a62" bound to claim "csi-mock-volumes-8423/pvc-444f4"
I0912 13:43:28.281072       1 pv_controller.go:823] claim "csi-mock-volumes-8423/pvc-444f4" entered phase "Bound"
I0912 13:43:28.466805       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "aws-5p2ln" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0eb2b809b8bd110c1") on node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:43:28.469835       1 operation_generator.go:1577] Verified volume is safe to detach for volume "aws-5p2ln" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0eb2b809b8bd110c1") on node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:43:28.510840       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-4610/test-cleanup-controller" need=1 creating=1
I0912 13:43:28.518410       1 event.go:294] "Event occurred" object="deployment-4610/test-cleanup-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-cleanup-controller-wsdsg"
I0912 13:43:29.056984       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volumemode-350/pvc-mzv5d"
I0912 13:43:29.079661       1 pv_controller.go:640] volume "aws-5p2ln" is released and reclaim policy "Retain" will be executed
I0912 13:43:29.082636       1 pv_controller.go:879] volume "aws-5p2ln" entered phase "Released"
E0912 13:43:29.369457       1 tokens_controller.go:262] error synchronizing serviceaccount sctp-108/default: secrets "default-token-8s8lh" is forbidden: unable to create new content in namespace sctp-108 because it is being terminated
I0912 13:43:29.683505       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-adacac06-321d-48df-a15b-f97ab94e9343" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-5197^68b9e271-13cf-11ec-8ef8-168b221ab60e") from node "ip-172-20-60-94.eu-central-1.compute.internal"
I0912 13:43:29.952732       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-6328^4a026bd3-13cf-11ec-ba0e-867ea978be8d") on node "ip-172-20-45-127.eu-central-1.compute.internal"
I0912 13:43:29.958347       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-6328^4a026bd3-13cf-11ec-ba0e-867ea978be8d") on node "ip-172-20-45-127.eu-central-1.compute.internal"
I0912 13:43:30.223475       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-adacac06-321d-48df-a15b-f97ab94e9343" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-5197^68b9e271-13cf-11ec-8ef8-168b221ab60e") from node "ip-172-20-60-94.eu-central-1.compute.internal"
I0912 13:43:30.223659       1 event.go:294] "Event occurred" object="volume-5197/hostpath-injector" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-adacac06-321d-48df-a15b-f97ab94e9343\" "
I0912 13:43:30.503560       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-0dceeca3-5a5f-4c51-8515-7d1fdceecc77" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-6328^4a026bd3-13cf-11ec-ba0e-867ea978be8d") on node "ip-172-20-45-127.eu-central-1.compute.internal"
I0912 13:43:31.033746       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-2598/pvc-24dgn"
I0912 13:43:31.043774       1 pv_controller.go:640] volume "aws-lmp8z" is released and reclaim policy "Retain" will be executed
I0912 13:43:31.048937       1 pv_controller.go:879] volume "aws-lmp8z" entered phase "Released"
I0912 13:43:31.167519       1 replica_set.go:563] "Too few replicas" replicaSet="deployment-4610/test-cleanup-deployment-5b4d99b59b" need=1 creating=1
I0912 13:43:31.168277       1 event.go:294] "Event occurred" object="deployment-4610/test-cleanup-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-cleanup-deployment-5b4d99b59b to 1"
I0912 13:43:31.175917       1 event.go:294] "Event occurred" object="deployment-4610/test-cleanup-deployment-5b4d99b59b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-cleanup-deployment-5b4d99b59b-dttck"
I0912 13:43:31.187353       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-4610/test-cleanup-deployment" err="Operation cannot be fulfilled on deployments.apps \"test-cleanup-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:43:31.705374       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-2990-6570
I0912 13:43:32.098478       1 namespace_controller.go:185] Namespace has been deleted kubectl-5735
E0912 13:43:32.628320       1 namespace_controller.go:162] deletion of namespace webhook-552 failed: unexpected items still remain in namespace: webhook-552 for gvr: /v1, Resource=pods
E0912 13:43:32.805543       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-4276/default: secrets "default-token-8lmkq" is forbidden: unable to create new content in namespace provisioning-4276 because it is being terminated
I0912 13:43:32.881611       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-7bkhk" objectUID=2a12326a-037f-40f8-b448-60c913bd4ff1 kind="Pod" virtual=false
I0912 13:43:32.882434       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-zzdb7" objectUID=f268b2f5-d272-4f1b-a04b-885a44dc4362 kind="Pod" virtual=false
I0912 13:43:32.882782       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-zlgtk" objectUID=7173d0a3-e6a9-450e-8d51-2c454d8d3c98 kind="Pod" virtual=false
I0912 13:43:32.886179       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-7bkhk" objectUID=2a12326a-037f-40f8-b448-60c913bd4ff1 kind="Pod" propagationPolicy=Background
I0912 13:43:32.889901       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-zzdb7" objectUID=f268b2f5-d272-4f1b-a04b-885a44dc4362 kind="Pod" propagationPolicy=Background
I0912 13:43:32.890513       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-zlgtk" objectUID=7173d0a3-e6a9-450e-8d51-2c454d8d3c98 kind="Pod" propagationPolicy=Background
I0912 13:43:32.906275       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-7bkhk" objectUID=5f7c66de-1b19-4a88-8f95-de4b48de0ea7 kind="CiliumEndpoint" virtual=false
I0912 13:43:32.918482       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-7bkhk" objectUID=5f7c66de-1b19-4a88-8f95-de4b48de0ea7 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:43:32.924779       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-zlgtk" objectUID=a21d783b-b3bd-4c6a-8287-77c43967985f kind="CiliumEndpoint" virtual=false
W0912 13:43:32.925114       1 endpointslice_controller.go:306] Error syncing endpoint slices for service "deployment-5161/test-rolling-update-with-lb", retrying. Error: EndpointSlice informer cache is out of date
I0912 13:43:32.949312       1 endpoints_controller.go:374] "Error syncing endpoints, retrying" service="deployment-5161/test-rolling-update-with-lb" err="Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:43:32.950105       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint deployment-5161/test-rolling-update-with-lb: Operation cannot be fulfilled on endpoints \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:43:32.950457       1 garbagecollector.go:471] "Processing object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-zzdb7" objectUID=d076cbd0-326d-4d75-9616-62200bedcaf0 kind="CiliumEndpoint" virtual=false
I0912 13:43:32.966752       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-zlgtk" objectUID=a21d783b-b3bd-4c6a-8287-77c43967985f kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:43:32.970184       1 garbagecollector.go:580] "Deleting object" object="deployment-5161/test-rolling-update-with-lb-686dff95d9-zzdb7" objectUID=d076cbd0-326d-4d75-9616-62200bedcaf0 kind="CiliumEndpoint" propagationPolicy=Background
E0912 13:43:33.017288       1 tokens_controller.go:262] error synchronizing serviceaccount deployment-5161/default: secrets "default-token-jpfq2" is forbidden: unable to create new content in namespace deployment-5161 because it is being terminated
I0912 13:43:33.277906       1 namespace_controller.go:185] Namespace has been deleted volume-expand-1710
E0912 13:43:33.315362       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:43:33.380793       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-5161/test-rolling-update-with-lb"
I0912 13:43:33.450537       1 garbagecollector.go:471] "Processing object" object="volume-expand-1710-6821/csi-hostpathplugin-5f97f7987b" objectUID=19ef14ce-a971-43ef-916d-12e784d250bb kind="ControllerRevision" virtual=false
I0912 13:43:33.451096       1 stateful_set.go:440] StatefulSet has been deleted volume-expand-1710-6821/csi-hostpathplugin
I0912 13:43:33.451253       1 garbagecollector.go:471] "Processing object" object="volume-expand-1710-6821/csi-hostpathplugin-0" objectUID=93403358-487f-4f0a-b895-85dc2c25d15d kind="Pod" virtual=false
I0912 13:43:33.453427       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1710-6821/csi-hostpathplugin-5f97f7987b" objectUID=19ef14ce-a971-43ef-916d-12e784d250bb kind="ControllerRevision" propagationPolicy=Background
I0912 13:43:33.453452       1 garbagecollector.go:580] "Deleting object" object="volume-expand-1710-6821/csi-hostpathplugin-0" objectUID=93403358-487f-4f0a-b895-85dc2c25d15d kind="Pod" propagationPolicy=Background
I0912 13:43:33.455975       1 controller.go:385] Deleting existing load balancer for service deployment-5161/test-rolling-update-with-lb
I0912 13:43:33.457556       1 event.go:294] "Event occurred" object="deployment-5161/test-rolling-update-with-lb" kind="Service" apiVersion="v1" type="Normal" reason="DeletingLoadBalancer" message="Deleting load balancer"
E0912 13:43:33.462988       1 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"test-rolling-update-with-lb.16a416f69c5bed48", GenerateName:"", Namespace:"deployment-5161", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"deployment-5161", Name:"test-rolling-update-with-lb", UID:"2c3939e8-e2df-476a-812c-11a28989a5e2", APIVersion:"v1", ResourceVersion:"28550", FieldPath:""}, Reason:"DeletingLoadBalancer", Message:"Deleting load balancer", Source:v1.EventSource{Component:"service-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0479ee15b2dfb48, ext:801845224364, loc:(*time.Location)(0x6cb2280)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0479ee15b2dfb48, ext:801845224364, loc:(*time.Location)(0x6cb2280)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "test-rolling-update-with-lb.16a416f69c5bed48" is forbidden: unable to create new content in namespace deployment-5161 because it is being terminated' (will not retry!)
I0912 13:43:33.584558       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-6cbea32f-4f7c-47a1-a966-dde7da716596" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-00231b8d3aeb0c4eb") on node "ip-172-20-34-134.eu-central-1.compute.internal"
I0912 13:43:33.723508       1 aws.go:4528] Removing rule for traffic from the load balancer (sg-02bd23742a3f8d10a) to instance (sg-01e9d3085490866af)
I0912 13:43:33.784319       1 aws.go:3100] Comparing sg-02bd23742a3f8d10a to sg-01e9d3085490866af
I0912 13:43:33.784339       1 aws.go:3100] Comparing sg-02bd23742a3f8d10a to sg-02bd23742a3f8d10a
I0912 13:43:33.784345       1 aws.go:3291] Removing security group ingress: sg-01e9d3085490866af [{
  IpProtocol: "-1",
  UserIdGroupPairs: [{
      GroupId: "sg-02bd23742a3f8d10a"
    }]
}]
I0912 13:43:34.235168       1 aws.go:4717] Ignoring DependencyViolation while deleting load-balancer security group (sg-02bd23742a3f8d10a), assuming because LB is in process of deleting
I0912 13:43:34.235195       1 aws.go:4741] Waiting for load-balancer to delete so we can delete security groups: test-rolling-update-with-lb
I0912 13:43:34.457996       1 namespace_controller.go:185] Namespace has been deleted sctp-108
E0912 13:43:34.664684       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:43:35.304564       1 pv_controller_base.go:505] deletion of claim "volumemode-350/pvc-mzv5d" was already processed
I0912 13:43:35.407169       1 namespace_controller.go:185] Namespace has been deleted certificates-1157
I0912 13:43:35.838125       1 namespace_controller.go:185] Namespace has been deleted ephemeral-6328
I0912 13:43:35.915103       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "aws-5p2ln" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0eb2b809b8bd110c1") on node "ip-172-20-48-249.eu-central-1.compute.internal"
I0912 13:43:36.008430       1 stateful_set.go:440] StatefulSet has been deleted ephemeral-6328-4455/csi-hostpathplugin
I0912 13:43:36.008593       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328-4455/csi-hostpathplugin-0" objectUID=594ace9a-7c1f-4350-b5fa-ce5fc915ee97 kind="Pod" virtual=false
I0912 13:43:36.008572       1 garbagecollector.go:471] "Processing object" object="ephemeral-6328-4455/csi-hostpathplugin-6869db4bf4" objectUID=d25a3acf-123d-4bf4-b9d8-aeda38be6c56 kind="ControllerRevision" virtual=false
I0912 13:43:36.017910       1 garbagecollector.go:580] "Deleting object" object="ephemeral-6328-4455/csi-hostpathplugin-6869db4bf4" objectUID=d25a3acf-123d-4bf4-b9d8-aeda38be6c56 kind="ControllerRevision" propagationPolicy=Background
I0912 13:43:36.018309       1 garbagecollector.go:580] "Deleting object" object="ephemeral-6328-4455/csi-hostpathplugin-0" objectUID=594ace9a-7c1f-4350-b5fa-ce5fc915ee97 kind="Pod" propagationPolicy=Background
I0912 13:43:36.380904       1 event.go:294] "Event occurred" object="fsgroupchangepolicy-6885/awsrdtq7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0912 13:43:36.619870       1 event.go:294] "Event occurred" object="fsgroupchangepolicy-6885/awsrdtq7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"ebs.csi.aws.com\" or manually created by system administrator"
I0912 13:43:36.945235       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume 
\"pvc-560faf69-c019-4483-b626-794503c4bb94\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-07b2c1d13bbc0d08b\") on node \"ip-172-20-60-94.eu-central-1.compute.internal\" \nI0912 13:43:37.343055       1 event.go:294] \"Event occurred\" object=\"deployment-4610/test-cleanup-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-cleanup-controller to 0\"\nI0912 13:43:37.343475       1 replica_set.go:599] \"Too many replicas\" replicaSet=\"deployment-4610/test-cleanup-controller\" need=0 deleting=1\nI0912 13:43:37.343551       1 replica_set.go:227] \"Found related ReplicaSets\" replicaSet=\"deployment-4610/test-cleanup-controller\" relatedReplicaSets=[test-cleanup-deployment-5b4d99b59b test-cleanup-controller]\nI0912 13:43:37.343650       1 controller_utils.go:592] \"Deleting pod\" controller=\"test-cleanup-controller\" pod=\"deployment-4610/test-cleanup-controller-wsdsg\"\nI0912 13:43:37.357027       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4610/test-cleanup-controller-wsdsg\" objectUID=721b534c-843e-4813-99ee-d40f09b7468c kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:37.357514       1 event.go:294] \"Event occurred\" object=\"deployment-4610/test-cleanup-controller\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-cleanup-controller-wsdsg\"\nI0912 13:43:37.368513       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4610/test-cleanup-controller-wsdsg\" objectUID=721b534c-843e-4813-99ee-d40f09b7468c kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:37.985213       1 namespace_controller.go:185] Namespace has been deleted provisioning-4276\nI0912 13:43:39.064609       1 namespace_controller.go:185] Namespace has been deleted tables-1808\nI0912 13:43:39.630511       1 graph_builder.go:587] add [v1/Pod, namespace: csi-mock-volumes-691, name: 
inline-volume-dtsdc, uid: a2908b0f-074e-4d47-a667-e45768829cd6] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0912 13:43:39.631344       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-691/inline-volume-dtsdc\" objectUID=a2908b0f-074e-4d47-a667-e45768829cd6 kind=\"Pod\" virtual=false\nI0912 13:43:39.640913       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: csi-mock-volumes-691, name: inline-volume-dtsdc, uid: a2908b0f-074e-4d47-a667-e45768829cd6]\nI0912 13:43:39.790229       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-348/e2e-test-webhook-qscxq\" objectUID=8409433d-2c61-46f3-90a1-bb9aaaec856d kind=\"EndpointSlice\" virtual=false\nI0912 13:43:39.794199       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-348/e2e-test-webhook-qscxq\" objectUID=8409433d-2c61-46f3-90a1-bb9aaaec856d kind=\"EndpointSlice\" propagationPolicy=Background\nI0912 13:43:39.940323       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-348/sample-webhook-deployment-78988fc6cd\" objectUID=c4660f45-44a4-421a-b9e6-94ff49c4eb0d kind=\"ReplicaSet\" virtual=false\nI0912 13:43:39.940959       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"webhook-348/sample-webhook-deployment\"\nI0912 13:43:39.946224       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-348/sample-webhook-deployment-78988fc6cd\" objectUID=c4660f45-44a4-421a-b9e6-94ff49c4eb0d kind=\"ReplicaSet\" propagationPolicy=Background\nI0912 13:43:39.950014       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-348/sample-webhook-deployment-78988fc6cd-tbr62\" objectUID=14c0d36f-db59-40e4-bfe2-7246d94d4adf kind=\"Pod\" virtual=false\nI0912 13:43:39.954454       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-348/sample-webhook-deployment-78988fc6cd-tbr62\" objectUID=14c0d36f-db59-40e4-bfe2-7246d94d4adf 
kind=\"Pod\" propagationPolicy=Background\nI0912 13:43:39.967055       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"aws-lmp8z\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:39.975204       1 garbagecollector.go:471] \"Processing object\" object=\"webhook-348/sample-webhook-deployment-78988fc6cd-tbr62\" objectUID=3b695493-5053-49e8-8445-1837e36cbf87 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:39.975467       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"aws-lmp8z\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868\") on node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:39.979297       1 garbagecollector.go:580] \"Deleting object\" object=\"webhook-348/sample-webhook-deployment-78988fc6cd-tbr62\" objectUID=3b695493-5053-49e8-8445-1837e36cbf87 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:40.018835       1 pv_controller_base.go:505] deletion of claim \"statefulset-2162/datadir-ss-0\" was already processed\nI0912 13:43:40.058829       1 pv_controller.go:879] volume \"pvc-05adc6af-2da8-45ac-bf01-1e9e030a4f9c\" entered phase \"Bound\"\nI0912 13:43:40.058868       1 pv_controller.go:982] volume \"pvc-05adc6af-2da8-45ac-bf01-1e9e030a4f9c\" bound to claim \"fsgroupchangepolicy-6885/awsrdtq7\"\nI0912 13:43:40.068291       1 pv_controller.go:823] claim \"fsgroupchangepolicy-6885/awsrdtq7\" entered phase \"Bound\"\nI0912 13:43:40.169975       1 pv_controller_base.go:505] deletion of claim \"statefulset-2162/datadir-ss-1\" was already processed\nI0912 13:43:40.654961       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-9858/test-new-deployment-847dcfb7fb\" need=1 creating=1\nI0912 13:43:40.655841       1 event.go:294] \"Event occurred\" object=\"deployment-9858/test-new-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-new-deployment-847dcfb7fb to 1\"\nI0912 13:43:40.672896       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-05adc6af-2da8-45ac-bf01-1e9e030a4f9c\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-08c14af5e5fa0ddbc\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:40.673928       1 event.go:294] \"Event occurred\" object=\"deployment-9858/test-new-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-new-deployment-847dcfb7fb-8jp74\"\nI0912 13:43:40.683735       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-9858/test-new-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-new-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0912 13:43:40.908339       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-7032/default: secrets \"default-token-sxjgq\" is forbidden: unable to create new content in namespace provisioning-7032 because it is being terminated\nI0912 13:43:41.215746       1 garbagecollector.go:471] \"Processing object\" object=\"dns-1806/dns-test-f4c5bc5f-d87f-44dc-a726-3d253a0bd8f0\" objectUID=c443d6a4-e752-4aed-9119-91ad75351fe6 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:41.230595       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-1806/dns-test-f4c5bc5f-d87f-44dc-a726-3d253a0bd8f0\" objectUID=c443d6a4-e752-4aed-9119-91ad75351fe6 kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0912 13:43:41.609950       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0912 13:43:41.789972       1 
reconciler.go:335] Multi-Attach error for volume \"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02a8edb81457b618d\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" Volume is already exclusively attached to node ip-172-20-48-249.eu-central-1.compute.internal and can't be attached to another\nI0912 13:43:41.790154       1 event.go:294] \"Event occurred\" object=\"volume-6763/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\\\" Volume is already exclusively attached to one node and can't be attached to another\"\nI0912 13:43:42.408783       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02a8edb81457b618d\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:43:42.411445       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02a8edb81457b618d\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:43:42.677692       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-8423/pvc-444f4\"\nI0912 13:43:42.685247       1 pv_controller.go:640] volume \"pvc-d6aeab61-cdd7-4b02-a0c7-79613d510a62\" is released and reclaim policy \"Delete\" will be executed\nI0912 13:43:42.691467       1 pv_controller.go:879] volume \"pvc-d6aeab61-cdd7-4b02-a0c7-79613d510a62\" entered phase \"Released\"\nI0912 13:43:42.693310       1 pv_controller.go:1340] isVolumeReleased[pvc-d6aeab61-cdd7-4b02-a0c7-79613d510a62]: volume is released\nI0912 13:43:42.819292       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-8423/pvc-444f4\" was already processed\nE0912 13:43:43.242168       1 
tokens_controller.go:262] error synchronizing serviceaccount deployment-4610/default: secrets \"default-token-4jnbp\" is forbidden: unable to create new content in namespace deployment-4610 because it is being terminated\nI0912 13:43:43.336194       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"deployment-4610/test-cleanup-deployment-5b4d99b59b\" need=1 creating=1\nI0912 13:43:43.340933       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-4610/test-cleanup-deployment-5b4d99b59b-dttck\" objectUID=4076c7b3-c946-438c-9435-8da410ed14e8 kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:43.372510       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-4610/test-cleanup-deployment-5b4d99b59b-dttck\" objectUID=4076c7b3-c946-438c-9435-8da410ed14e8 kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:43.418065       1 deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-4610/test-cleanup-deployment\"\nI0912 13:43:43.988464       1 namespace_controller.go:185] Namespace has been deleted volume-expand-1710-6821\nE0912 13:43:44.019258       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-3735/pvc-27wbx: storageclass.storage.k8s.io \"provisioning-3735\" not found\nI0912 13:43:44.019699       1 event.go:294] \"Event occurred\" object=\"provisioning-3735/pvc-27wbx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-3735\\\" not found\"\nI0912 13:43:44.131373       1 pv_controller.go:879] volume \"local-6x84h\" entered phase \"Available\"\nI0912 13:43:44.329551       1 expand_controller.go:289] Ignoring the PVC \"csi-mock-volumes-7189/pvc-np7pn\" (uid: \"523ec1e5-3564-4f5d-8507-09788d22f6f5\") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\nI0912 13:43:44.329730       1 event.go:294] \"Event 
occurred\" object=\"csi-mock-volumes-7189/pvc-np7pn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ExternalExpanding\" message=\"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\"\nI0912 13:43:44.453095       1 aws.go:4717] Ignoring DependencyViolation while deleting load-balancer security group (sg-02bd23742a3f8d10a), assuming because LB is in process of deleting\nI0912 13:43:44.453174       1 aws.go:4741] Waiting for load-balancer to delete so we can delete security groups: test-rolling-update-with-lb\nE0912 13:43:44.615744       1 tokens_controller.go:262] error synchronizing serviceaccount webhook-348/default: secrets \"default-token-zlczn\" is forbidden: unable to create new content in namespace webhook-348 because it is being terminated\nW0912 13:43:45.328332       1 reconciler.go:376] Multi-Attach error for volume \"pvc-f0462d9f-2288-44cf-8644-94d4cb32becd\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-0b1d2ffd32e329fd9\") from node \"ip-172-20-45-127.eu-central-1.compute.internal\" Volume is already used by pods provisioning-6646/pod-b993d722-2a76-4323-b8a5-6777a0610a25 on node ip-172-20-60-94.eu-central-1.compute.internal\nI0912 13:43:45.328649       1 event.go:294] \"Event occurred\" object=\"provisioning-6646/pvc-volume-tester-writer-x49l5\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedAttachVolume\" message=\"Multi-Attach error for volume \\\"pvc-f0462d9f-2288-44cf-8644-94d4cb32becd\\\" Volume is already used by pod(s) pod-b993d722-2a76-4323-b8a5-6777a0610a25\"\nI0912 13:43:45.616512       1 namespace_controller.go:185] Namespace has been deleted volumemode-350\nI0912 13:43:45.858504       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9858/test-new-deployment-847dcfb7fb\" objectUID=88befc22-50d5-4e13-8b5d-a6b0c17a56ba kind=\"ReplicaSet\" virtual=false\nI0912 13:43:45.859003       1 
deployment_controller.go:583] \"Deployment has been deleted\" deployment=\"deployment-9858/test-new-deployment\"\nI0912 13:43:45.861108       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-9858/test-new-deployment-847dcfb7fb\" objectUID=88befc22-50d5-4e13-8b5d-a6b0c17a56ba kind=\"ReplicaSet\" propagationPolicy=Background\nI0912 13:43:45.863547       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9858/test-new-deployment-847dcfb7fb-8jp74\" objectUID=953bb140-6b44-4fda-83c6-1ff71d4b9438 kind=\"Pod\" virtual=false\nI0912 13:43:45.866698       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-9858/test-new-deployment-847dcfb7fb-8jp74\" objectUID=953bb140-6b44-4fda-83c6-1ff71d4b9438 kind=\"Pod\" propagationPolicy=Background\nI0912 13:43:45.875580       1 garbagecollector.go:471] \"Processing object\" object=\"deployment-9858/test-new-deployment-847dcfb7fb-8jp74\" objectUID=2e374f14-9bc8-4e16-ada5-3d175312266e kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:45.881293       1 garbagecollector.go:580] \"Deleting object\" object=\"deployment-9858/test-new-deployment-847dcfb7fb-8jp74\" objectUID=2e374f14-9bc8-4e16-ada5-3d175312266e kind=\"CiliumEndpoint\" propagationPolicy=Background\nI0912 13:43:46.234223       1 namespace_controller.go:185] Namespace has been deleted provisioning-7032\nI0912 13:43:46.328037       1 garbagecollector.go:471] \"Processing object\" object=\"dns-1806/dns-test-2e7554c6-d67b-4f57-bd46-8d816de9ccc0\" objectUID=0b974bed-2cc9-4c38-8527-b31b2a8071cf kind=\"CiliumEndpoint\" virtual=false\nI0912 13:43:46.334219       1 garbagecollector.go:580] \"Deleting object\" object=\"dns-1806/dns-test-2e7554c6-d67b-4f57-bd46-8d816de9ccc0\" objectUID=0b974bed-2cc9-4c38-8527-b31b2a8071cf kind=\"CiliumEndpoint\" propagationPolicy=Background\nE0912 13:43:46.438983       1 pv_controller.go:1451] error finding provisioning plugin for claim provisioning-4878/pvc-pgsgh: storageclass.storage.k8s.io 
\"provisioning-4878\" not found\nI0912 13:43:46.440378       1 event.go:294] \"Event occurred\" object=\"provisioning-4878/pvc-pgsgh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-4878\\\" not found\"\nI0912 13:43:46.542046       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-7591\nI0912 13:43:46.555008       1 pv_controller.go:879] volume \"local-zhlj8\" entered phase \"Available\"\nI0912 13:43:46.653368       1 namespace_controller.go:185] Namespace has been deleted ephemeral-6328-4455\nI0912 13:43:46.682983       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"kubectl-3690/update-demo-nautilus\" need=2 creating=2\nI0912 13:43:46.689337       1 event.go:294] \"Event occurred\" object=\"kubectl-3690/update-demo-nautilus\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: update-demo-nautilus-b9j7v\"\nI0912 13:43:46.693986       1 event.go:294] \"Event occurred\" object=\"kubectl-3690/update-demo-nautilus\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: update-demo-nautilus-wz2sw\"\nI0912 13:43:47.102909       1 resource_quota_controller.go:307] Resource quota has been deleted resourcequota-6030/test-quota\nI0912 13:43:48.207388       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-05adc6af-2da8-45ac-bf01-1e9e030a4f9c\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-08c14af5e5fa0ddbc\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:48.207609       1 event.go:294] \"Event occurred\" object=\"fsgroupchangepolicy-6885/pod-1890e631-ab9d-44e1-af67-759cc160e3d8\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-05adc6af-2da8-45ac-bf01-1e9e030a4f9c\\\" 
\"\nI0912 13:43:48.349447       1 namespace_controller.go:185] Namespace has been deleted provisioning-957\nI0912 13:43:48.460331       1 namespace_controller.go:185] Namespace has been deleted deployment-4610\nI0912 13:43:48.987334       1 namespace_controller.go:185] Namespace has been deleted proxy-8536\nI0912 13:43:49.212931       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-7189/pvc-np7pn\"\nI0912 13:43:49.219049       1 pv_controller.go:640] volume \"pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5\" is released and reclaim policy \"Delete\" will be executed\nI0912 13:43:49.221783       1 pv_controller.go:879] volume \"pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5\" entered phase \"Released\"\nI0912 13:43:49.225575       1 pv_controller.go:1340] isVolumeReleased[pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5]: volume is released\nI0912 13:43:49.727735       1 namespace_controller.go:185] Namespace has been deleted webhook-348\nI0912 13:43:49.757004       1 event.go:294] \"Event occurred\" object=\"provisioning-4786/pvc-lhk68\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-4786\\\" or manually created by system administrator\"\nI0912 13:43:49.820700       1 pv_controller.go:879] volume \"pvc-edfcd390-0376-4cff-b5e6-7978d638a17b\" entered phase \"Bound\"\nI0912 13:43:49.820738       1 pv_controller.go:982] volume \"pvc-edfcd390-0376-4cff-b5e6-7978d638a17b\" bound to claim \"provisioning-4786/pvc-lhk68\"\nI0912 13:43:49.827394       1 pv_controller.go:823] claim \"provisioning-4786/pvc-lhk68\" entered phase \"Bound\"\nI0912 13:43:49.835307       1 namespace_controller.go:185] Namespace has been deleted webhook-348-markers\nI0912 13:43:49.873749       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-edfcd390-0376-4cff-b5e6-7978d638a17b\" (UniqueName: 
\"kubernetes.io/csi/csi-hostpath-provisioning-4786^75d5a2df-13cf-11ec-8029-a6db75d98da1\") from node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:49.900980       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume \"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02a8edb81457b618d\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:43:49.975967       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02a8edb81457b618d\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:50.085180       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-5ef7699d-44b7-4f56-b9c1-8fc0ff7c9d8e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-4786^641de971-13cf-11ec-8029-a6db75d98da1\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:50.091131       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-5ef7699d-44b7-4f56-b9c1-8fc0ff7c9d8e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-4786^641de971-13cf-11ec-8029-a6db75d98da1\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:50.389533       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-edfcd390-0376-4cff-b5e6-7978d638a17b\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-4786^75d5a2df-13cf-11ec-8029-a6db75d98da1\") from node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nI0912 13:43:50.389766       1 event.go:294] \"Event occurred\" object=\"provisioning-4786/hostpath-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-edfcd390-0376-4cff-b5e6-7978d638a17b\\\" \"\nI0912 13:43:50.599409       1 operation_generator.go:484] 
DetachVolume.Detach succeeded for volume \"pvc-5ef7699d-44b7-4f56-b9c1-8fc0ff7c9d8e\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-4786^641de971-13cf-11ec-8029-a6db75d98da1\") on node \"ip-172-20-45-127.eu-central-1.compute.internal\" \nE0912 13:43:50.756547       1 tokens_controller.go:262] error synchronizing serviceaccount statefulset-2162/default: secrets \"default-token-n9x94\" is forbidden: unable to create new content in namespace statefulset-2162 because it is being terminated\nI0912 13:43:50.810195       1 replica_set.go:563] \"Too few replicas\" replicaSet=\"replication-controller-5801/rc-test\" need=1 creating=1\nI0912 13:43:50.816776       1 event.go:294] \"Event occurred\" object=\"replication-controller-5801/rc-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rc-test-v58ct\"\nE0912 13:43:51.536989       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0912 13:43:52.169322       1 tokens_controller.go:262] error synchronizing serviceaccount node-lease-test-9056/default: secrets \"default-token-s2kjh\" is forbidden: unable to create new content in namespace node-lease-test-9056 because it is being terminated\nI0912 13:43:52.252772       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume \"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\" (UniqueName: \"kubernetes.io/csi/ebs.csi.aws.com^vol-02a8edb81457b618d\") from node \"ip-172-20-34-134.eu-central-1.compute.internal\" \nI0912 13:43:52.252959       1 event.go:294] \"Event occurred\" object=\"volume-6763/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-6dffabec-79e9-42c4-bc6a-dde7b3d83060\\\" \"\nE0912 13:43:52.313755       1 
pv_controller.go:1451] error finding provisioning plugin for claim provisioning-8300/pvc-652gd: storageclass.storage.k8s.io \"provisioning-8300\" not found\nI0912 13:43:52.313880       1 event.go:294] \"Event occurred\" object=\"provisioning-8300/pvc-652gd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-8300\\\" not found\"\nI0912 13:43:52.426123       1 pv_controller.go:879] volume \"local-rvm7r\" entered phase \"Available\"\nI0912 13:43:52.437841       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7189^4\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nI0912 13:43:52.443642       1 operation_generator.go:1577] Verified volume is safe to detach for volume \"pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-7189^4\") on node \"ip-172-20-48-249.eu-central-1.compute.internal\" \nE0912 13:43:52.515040       1 tokens_controller.go:262] error synchronizing serviceaccount resourcequota-6030/default: secrets \"default-token-dn597\" is forbidden: unable to create new content in namespace resourcequota-6030 because it is being terminated\nI0912 13:43:52.577896       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-6104-296/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0912 13:43:52.765004       1 pv_controller_base.go:505] deletion of claim \"volume-2598/pvc-24dgn\" was already processed\nI0912 13:43:52.805969       1 event.go:294] \"Event occurred\" object=\"csi-mock-volumes-6104-296/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod 
csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful"
I0912 13:43:52.981458       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-523ec1e5-3564-4f5d-8507-09788d22f6f5" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-7189^4") on node "ip-172-20-48-249.eu-central-1.compute.internal" 
I0912 13:43:53.066391       1 replica_set.go:563] "Too few replicas" replicaSet="replication-controller-5801/rc-test" need=2 creating=1
I0912 13:43:53.071227       1 event.go:294] "Event occurred" object="replication-controller-5801/rc-test" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: rc-test-9l4x8"
I0912 13:43:53.365715       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-f0462d9f-2288-44cf-8644-94d4cb32becd" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0b1d2ffd32e329fd9") on node "ip-172-20-60-94.eu-central-1.compute.internal" 
I0912 13:43:53.368496       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-f0462d9f-2288-44cf-8644-94d4cb32becd" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0b1d2ffd32e329fd9") on node "ip-172-20-60-94.eu-central-1.compute.internal" 
I0912 13:43:53.658918       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "aws-lmp8z" (UniqueName: "kubernetes.io/csi/ebs.csi.aws.com^vol-0c69bf84986990868") on node "ip-172-20-34-134.eu-central-1.compute.internal" 
I0912 13:43:54.369120       1 namespace_controller.go:185] Namespace has been deleted pods-7875
I0912 13:43:54.590880       1 aws.go:4717] Ignoring DependencyViolation while deleting load-balancer security group (sg-02bd23742a3f8d10a), assuming because LB is in process of deleting
I0912 13:43:54.591117       1 aws.go:4741] Waiting for load-balancer to delete so we can delete security groups: test-rolling-update-with-lb
I0912 13:43:55.345528       1 replica_set.go:599] "Too many replicas" replicaSet="kubectl-3690/update-demo-nautilus" need=1 deleting=1
E0912 13:43:55.345945       1 replica_set.go:205] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{update-demo-nautilus  kubectl-3690  f12bb6fa-7f44-43ab-9a18-3f87e16658ef 29401 2 2021-09-12 13:43:46 +0000 UTC <nil> <nil> map[name:update-demo version:nautilus] map[] [] []  [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kubectl-create Update v1 2021-09-12 13:43:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:version":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:name":{},"f:version":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"update-demo\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update v1 2021-09-12 13:43:52 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: update-demo,version: nautilus,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:update-demo version:nautilus] map[] [] []  []} {[] [] [{update-demo k8s.gcr.io/e2e-test-images/nautilus:1.4 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x40039c35f8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:1,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},}
I0912 13:43:55.346081       1 controller_utils.go:592] "Deleting pod" controller="update-demo-nautilus" pod="kubectl-3690/update-demo-nautilus-b9j7v"
I0912 13:43:55.347534       1 replica_set.go:563] "Too few replicas" replicaSet="webhook-4747/sample-webhook-deployment-78988fc6cd" need=1 creating=1
I0912 13:43:55.351213       1 event.go:294] "Event occurred" object="webhook-4747/sample-webhook-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set sample-webhook-deployment-78988fc6cd to 1"
I0912 13:43:55.357272       1 event.go:294] "Event occurred" object="kubectl-3690/update-demo-nautilus" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: update-demo-nautilus-b9j7v"
I0912 13:43:55.365926       1 event.go:294] "Event occurred" object="webhook-4747/sample-webhook-deployment-78988fc6cd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: sample-webhook-deployment-78988fc6cd-rplbr"
I0912 13:43:55.370561       1 deployment_controller.go:490] "Error syncing deployment" deployment="webhook-4747/sample-webhook-deployment" err="Operation cannot be fulfilled on deployments.apps \"sample-webhook-deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0912 13:43:55.414684       1 garbagecollector.go:471] "Processing object" object="replication-controller-5801/rc-test" objectUID=edf52b28-a84e-4fc8-8d0d-876295ff928d kind="ReplicationController" virtual=false
I0912 13:43:55.421285       1 garbagecollector.go:471] "Processing object" object="replication-controller-5801/rc-test" objectUID=edf52b28-a84e-4fc8-8d0d-876295ff928d kind="ReplicationController" virtual=false
E0912 13:43:55.428788       1 replica_set.go:536] sync "replication-controller-5801/rc-test" failed with Operation cannot be fulfilled on replicationcontrollers "rc-test": StorageError: invalid object, Code: 4, Key: /registry/controllers/replication-controller-5801/rc-test, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: edf52b28-a84e-4fc8-8d0d-876295ff928d, UID in object meta: 
E0912 13:43:55.449851       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:43:55.793582       1 namespace_controller.go:185] Namespace has been deleted statefulset-2162
I0912 13:43:56.166829       1 namespace_controller.go:185] Namespace has been deleted emptydir-8570
I0912 13:43:56.196290       1 namespace_controller.go:185] Namespace has been deleted projected-5545
I0912 13:43:56.249022       1 pv_controller_base.go:505] deletion of claim "csi-mock-volumes-7189/pvc-np7pn" was already processed
E0912 13:43:56.464925       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:43:56.553596       1 pv_controller.go:930] claim "provisioning-3735/pvc-27wbx" bound to volume "local-6x84h"
I0912 13:43:56.562571       1 pv_controller.go:879] volume "local-6x84h" entered phase "Bound"
I0912 13:43:56.562606       1 pv_controller.go:982] volume "local-6x84h" bound to claim "provisioning-3735/pvc-27wbx"
I0912 13:43:56.570168       1 pv_controller.go:823] claim "provisioning-3735/pvc-27wbx" entered phase "Bound"
I0912 13:43:56.570928       1 pv_controller.go:930] claim "provisioning-8300/pvc-652gd" bound to volume "local-rvm7r"
I0912 13:43:56.580844       1 pv_controller.go:879] volume "local-rvm7r" entered phase "Bound"
I0912 13:43:56.580885       1 pv_controller.go:982] volume "local-rvm7r" bound to claim "provisioning-8300/pvc-652gd"
I0912 13:43:56.587243       1 pv_controller.go:823] claim "provisioning-8300/pvc-652gd" entered phase "Bound"
I0912 13:43:56.587546       1 pv_controller.go:930] claim "provisioning-4878/pvc-pgsgh" bound to volume "local-zhlj8"
I0912 13:43:56.596749       1 pv_controller.go:879] volume "local-zhlj8" entered phase "Bound"
I0912 13:43:56.596903       1 pv_controller.go:982] volume "local-zhlj8" bound to claim "provisioning-4878/pvc-pgsgh"
I0912 13:43:56.603340       1 pv_controller.go:823] claim "provisioning-4878/pvc-pgsgh" entered phase "Bound"
I0912 13:43:56.954739       1 namespace_controller.go:185] Namespace has been deleted deployment-9858
I0912 13:43:57.035533       1 namespace_controller.go:185] Namespace has been deleted dns-1806
I0912 13:43:57.213764       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-9056
I0912 13:43:57.651418       1 namespace_controller.go:185] Namespace has been deleted resourcequota-6030
I0912 13:43:57.705263       1 replica_set.go:563] "Too few replicas" replicaSet="services-5923/affinity-clusterip-timeout" need=3 creating=3
I0912 13:43:57.710277       1 event.go:294] "Event occurred" object="services-5923/affinity-clusterip-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-timeout-rgd24"
I0912 13:43:57.722354       1 event.go:294] "Event occurred" object="services-5923/affinity-clusterip-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-timeout-5p6s2"
I0912 13:43:57.727166       1 event.go:294] "Event occurred" object="services-5923/affinity-clusterip-timeout" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: affinity-clusterip-timeout-k9scp"
I0912 13:43:58.017222       1 namespace_controller.go:185] Namespace has been deleted endpointslice-1055
E0912 13:43:58.571476       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-7403/default: secrets "default-token-2wzvc" is forbidden: unable to create new content in namespace nettest-7403 because it is being terminated
I0912 13:43:58.800296       1 namespace_controller.go:185] Namespace has been deleted volume-2598
E0912 13:43:58.956728       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 13:44:00.024679       1 garbagecollector.go:471] "Processing object" object="container-probe-5779/startup-f5ae775e-9219-41ba-8fba-7ddee55c8db3" objectUID=1c8df4b3-475a-4647-afd6-96cd21298ce0 kind="CiliumEndpoint" virtual=false
I0912 13:44:00.180340       1 namespace_controller.go:185] Namespace has been deleted subpath-420
I0912 13:44:00.195547       1 garbagecollector.go:580] "Deleting object" object="container-probe-5779/startup-f5ae775e-9219-41ba-8fba-7ddee55c8db3" objectUID=1c8df4b3-475a-4647-afd6-96cd21298ce0 kind="CiliumEndpoint" propagationPolicy=Background
I0912 13:44:00.313115       1 job_controller.go:406] enqueueing job cronjob-696/concurrent-27190904
I0912 13:44:00.314788       1 event.go:294] "Event occurred" object="cronjob-696/concurrent" kind="CronJob" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created job concurrent-27190904"
I0912 13:44:00.364653       1 cronjob_controllerv2.go:193] "Error cleaning up jobs" cronjob="cronjob-696/concurrent" resourceVersion="27106" err="Operation cannot be fulfilled on cronjobs.batch \"concurrent\": the object has been modified; please apply your changes to the latest version and try again"
E0912 13:44:00.364741       1 cronjob_controllerv2.go:154] error syncing CronJobController cronjob-696/concurrent, requeuing: Operation cannot be fulfilled on cronjobs.batch "concurrent": the object has been modified; please apply your changes to the latest version and try again
I0912 13:44:00.390022       1 job_controller.go:406] enqueueing job cronjob-696/concurrent-27190904
I0912 13:44:00.390804       1 event.go:294] "Event occurred" object="cronjob-696/concurrent-27190904" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: concurrent-27190904--1-g6llx"
I0912 13:44:00.423617       1 job_controller.go:406] enqueueing job cronjob-696/concurrent-27190904
I0912 13:44:00.424284       1 job_controller.go:406] enqueueing job cronjob-696/concurrent-27190904
E0912 13:44:00.886576       1 tokens_controller.go:262] error synchronizing serviceaccount replication-controller-5801/default: secrets "default-token-5vsv7" is forbidden: unable to create new content in namespace replication-controller-5801 because it is being terminated
I0912 13:44:01.559803       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-8423-4358/csi-mockplugin-7646f788cb" objectUID=9aa56101-bebd-4c29-9518-63024ce54f46 kind="ControllerRevision" virtual=false
I0912 13:44:01.559931       1 stateful_set.go:440] StatefulSet has been deleted csi-mock-volumes-8423-4358/csi-mockplugin
I0912 13:44:01.559989       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-8423-4358/csi-mockplugin-0" objectUID=12cc8ea7-28f6-451e-8e3c-3d133832fa4e kind="Pod" virtual=false
I0912 13:44:01.563075       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-8423-4358/csi-mockplugin-0" objectUID=12cc8ea7-28f6-451e-8e3c-3d133832fa4e kind="Pod" propagationPolicy=Background
I0912 13:44:01.563076       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-8423-4358/csi-mockplugin-7646f788cb" objectUID=9aa56101-bebd-4c29-9518-63024ce54f46 kind="ControllerRevision" propagationPolicy=Background
I0912 13:44:01.952010       1 garbagecollector.go:471] "Processing object" object="cronjob-696/concurrent-27190904" objectUID=37bd4002-ef1a-40ab-a288-52af2e1ae6d7 kind="Job" virtual=false
I0912 13:44:01.964853       1 garbagecollector.go:580] "Deleting object" object="cronjob-696/concurrent-27190904" objectUID=37bd4002-ef1a-40ab-a288-52af2e1ae6d7 kind="Job" propagationPolicy=Background
I0912 13:44:01.978621       1 garbagecollector.go:471] "Processing object" object="