Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-07-31 07:54
Elapsed: 29m40s
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 125 lines ...
I0731 07:55:17.208688    4073 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt
I0731 07:55:17.210568    4073 http.go:37] curl https://storage.googleapis.com/kops-ci/bin/1.22.0-alpha.3+v1.22.0-alpha.2-114-g345ef59498/linux/amd64/kops
I0731 07:55:18.204397    4073 up.go:43] Cleaning up any leaked resources from previous cluster
I0731 07:55:18.204446    4073 dumplogs.go:38] /logs/artifacts/717b22a0-f1d4-11eb-9ef5-1a6369567a27/kops toolbox dump --name e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user core
I0731 07:55:18.225192    4093 featureflag.go:175] FeatureFlag "SpecOverrideFlag"=true
I0731 07:55:18.226239    4093 featureflag.go:175] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io" not found
W0731 07:55:18.738820    4073 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0731 07:55:18.738925    4073 down.go:48] /logs/artifacts/717b22a0-f1d4-11eb-9ef5-1a6369567a27/kops delete cluster --name e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --yes
I0731 07:55:18.765377    4103 featureflag.go:175] FeatureFlag "SpecOverrideFlag"=true
I0731 07:55:18.765697    4103 featureflag.go:175] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io" not found
I0731 07:55:19.316190    4073 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/31 07:55:19 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0731 07:55:19.324131    4073 http.go:37] curl https://ip.jsb.workers.dev
I0731 07:55:19.425537    4073 up.go:144] /logs/artifacts/717b22a0-f1d4-11eb-9ef5-1a6369567a27/kops create cluster --name e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.3 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=075585003325/Flatcar-stable-2765.2.6-hvm --channel=alpha --networking=kubenet --container-runtime=docker --admin-access 35.238.210.34/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0731 07:55:19.448649    4111 featureflag.go:175] FeatureFlag "SpecOverrideFlag"=true
I0731 07:55:19.448769    4111 featureflag.go:175] FeatureFlag "AlphaAllowGCE"=true
I0731 07:55:19.498538    4111 create_cluster.go:825] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0731 07:55:20.000035    4111 new_cluster.go:1054]  Cloud Provider ID = aws
... skipping 41 lines ...
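For readability, the create-cluster invocation logged above, reflowed with one flag per line (all values copied verbatim from the log line; nothing added):

/logs/artifacts/717b22a0-f1d4-11eb-9ef5-1a6369567a27/kops create cluster \
  --name e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io \
  --cloud aws \
  --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.3 \
  --ssh-public-key /etc/aws-ssh/aws-ssh-public \
  --override cluster.spec.nodePortAccess=0.0.0.0/0 \
  --yes \
  --image=075585003325/Flatcar-stable-2765.2.6-hvm \
  --channel=alpha \
  --networking=kubenet \
  --container-runtime=docker \
  --admin-access 35.238.210.34/32 \
  --master-count 1 --master-volume-size 48 \
  --node-count 4 --node-volume-size 48 \
  --zones eu-west-2a \
  --master-size c5.large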

I0731 07:55:44.637642    4073 up.go:181] /logs/artifacts/717b22a0-f1d4-11eb-9ef5-1a6369567a27/kops validate cluster --name e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --count 10 --wait 20m0s
I0731 07:55:44.657523    4132 featureflag.go:175] FeatureFlag "SpecOverrideFlag"=true
I0731 07:55:44.657642    4132 featureflag.go:175] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io

W0731 07:55:45.892371    4132 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
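A minimal sketch of the checks this message suggests, assuming a shell with dig and a kubeconfig for the cluster (the cluster name is taken from the log above; everything else is standard tooling):

# Is the API record still the kops placeholder (203.0.113.123), or absent entirely?
dig +short api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io

# The dns-controller logs usually say why the record has not been updated.
# (Requires API reachability; if DNS never resolves, fall back to the
# kops toolbox dump invocation shown earlier in the log.)
kubectl -n kube-system logs deployment/dns-controller --tail=100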

Validation Failed
W0731 07:55:55.939868    4132 validate_cluster.go:232] (will retry): cluster not yet healthy
... skipping repeated validation output: the identical INSTANCE GROUPS / NODE STATUS / VALIDATION ERRORS block above was re-emitted on each retry (roughly every 10s) from 07:55:55 through 07:58:56, and the "no such host" lookup failure for api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io recurred at 07:56:16 and 07:58:16 ...
W0731 07:59:06.684092    4132 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...
Machine	i-02a39ac8c52407743				machine "i-02a39ac8c52407743" has not yet joined cluster
Machine	i-083b0055c35d5e50a				machine "i-083b0055c35d5e50a" has not yet joined cluster
Machine	i-0eccb4b5dfe1d0b8e				machine "i-0eccb4b5dfe1d0b8e" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-kl5vt		system-cluster-critical pod "coredns-5dc785954d-kl5vt" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-lx595	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-lx595" is pending

Validation Failed
W0731 07:59:19.460700    4132 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 7 lines ...
Machine	i-02a39ac8c52407743				machine "i-02a39ac8c52407743" has not yet joined cluster
Machine	i-083b0055c35d5e50a				machine "i-083b0055c35d5e50a" has not yet joined cluster
Machine	i-0eccb4b5dfe1d0b8e				machine "i-0eccb4b5dfe1d0b8e" has not yet joined cluster
Pod	kube-system/coredns-5dc785954d-kl5vt		system-cluster-critical pod "coredns-5dc785954d-kl5vt" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-lx595	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-lx595" is pending

Validation Failed
W0731 07:59:31.412720    4132 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 8 lines ...
Machine	i-083b0055c35d5e50a				machine "i-083b0055c35d5e50a" has not yet joined cluster
Machine	i-0eccb4b5dfe1d0b8e				machine "i-0eccb4b5dfe1d0b8e" has not yet joined cluster
Node	ip-172-20-58-77.eu-west-2.compute.internal	node "ip-172-20-58-77.eu-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-kl5vt		system-cluster-critical pod "coredns-5dc785954d-kl5vt" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-lx595	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-lx595" is pending

Validation Failed
W0731 07:59:43.237062    4132 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 9 lines ...
Machine	i-02a39ac8c52407743				machine "i-02a39ac8c52407743" has not yet joined cluster
Node	ip-172-20-51-93.eu-west-2.compute.internal	node "ip-172-20-51-93.eu-west-2.compute.internal" of role "node" is not ready
Node	ip-172-20-61-108.eu-west-2.compute.internal	node "ip-172-20-61-108.eu-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-5dc785954d-kl5vt		system-cluster-critical pod "coredns-5dc785954d-kl5vt" is pending
Pod	kube-system/coredns-autoscaler-84d4cfd89c-lx595	system-cluster-critical pod "coredns-autoscaler-84d4cfd89c-lx595" is pending

Validation Failed
W0731 07:59:55.139572    4132 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 6 lines ...

VALIDATION ERRORS
KIND	NAME									MESSAGE
Machine	i-02a39ac8c52407743							machine "i-02a39ac8c52407743" has not yet joined cluster
Pod	kube-system/kube-proxy-ip-172-20-54-176.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-54-176.eu-west-2.compute.internal" is pending

Validation Failed
W0731 08:00:07.046498    4132 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 6 lines ...
ip-172-20-61-108.eu-west-2.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-51-93.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-51-93.eu-west-2.compute.internal" is pending

Validation Failed
W0731 08:00:18.933227    4132 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 633 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 205 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:02:50.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9504" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0731 08:02:49.493346    4734 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 31 08:02:49.493: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 31 08:02:49.798: INFO: Waiting up to 5m0s for pod "pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59" in namespace "emptydir-7860" to be "Succeeded or Failed"
Jul 31 08:02:49.899: INFO: Pod "pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59": Phase="Pending", Reason="", readiness=false. Elapsed: 101.262774ms
Jul 31 08:02:52.002: INFO: Pod "pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20366949s
Jul 31 08:02:54.109: INFO: Pod "pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311369424s
Jul 31 08:02:56.212: INFO: Pod "pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.414056316s
STEP: Saw pod success
Jul 31 08:02:56.212: INFO: Pod "pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59" satisfied condition "Succeeded or Failed"
Jul 31 08:02:56.314: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59 container test-container: <nil>
STEP: delete the pod
Jul 31 08:02:57.042: INFO: Waiting for pod pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59 to disappear
Jul 31 08:02:57.143: INFO: Pod pod-8e8e66ea-cf7a-42fd-9027-7fccf3495d59 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.662 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Jul 31 08:02:47.935: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-1eddcd01-d49a-44c6-9152-bd5b26b48696
STEP: Creating a pod to test consume secrets
Jul 31 08:02:50.447: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07" in namespace "projected-1992" to be "Succeeded or Failed"
Jul 31 08:02:50.549: INFO: Pod "pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07": Phase="Pending", Reason="", readiness=false. Elapsed: 101.508117ms
Jul 31 08:02:52.650: INFO: Pod "pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203072762s
Jul 31 08:02:54.753: INFO: Pod "pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30558804s
Jul 31 08:02:56.854: INFO: Pod "pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.407308781s
STEP: Saw pod success
Jul 31 08:02:56.855: INFO: Pod "pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07" satisfied condition "Succeeded or Failed"
Jul 31 08:02:56.957: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 31 08:02:57.168: INFO: Waiting for pod pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07 to disappear
Jul 31 08:02:57.269: INFO: Pod pod-projected-secrets-ce2698fa-bbdd-45b2-997a-78e81804ff07 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 5 lines ...
• [SLOW TEST:10.059 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":1,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:02:57.601: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 162 lines ...
Jul 31 08:02:48.092: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-5873492e-dcb0-410a-b414-ca8c30593f0e
STEP: Creating a pod to test consume configMaps
Jul 31 08:02:48.533: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d" in namespace "projected-8336" to be "Succeeded or Failed"
Jul 31 08:02:48.638: INFO: Pod "pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d": Phase="Pending", Reason="", readiness=false. Elapsed: 104.56102ms
Jul 31 08:02:50.740: INFO: Pod "pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207015749s
Jul 31 08:02:52.843: INFO: Pod "pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310149418s
Jul 31 08:02:54.947: INFO: Pod "pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413928281s
Jul 31 08:02:57.050: INFO: Pod "pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.516723193s
STEP: Saw pod success
Jul 31 08:02:57.050: INFO: Pod "pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d" satisfied condition "Succeeded or Failed"
Jul 31 08:02:57.152: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:02:57.376: INFO: Waiting for pod pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d to disappear
Jul 31 08:02:57.477: INFO: Pod pod-projected-configmaps-3c5dca9a-ba58-49e6-98af-d1f170c4988d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.112 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:59
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:02:57.796: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 24 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 31 08:02:49.494: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7" in namespace "downward-api-4739" to be "Succeeded or Failed"
Jul 31 08:02:49.596: INFO: Pod "downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 101.224696ms
Jul 31 08:02:51.700: INFO: Pod "downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205818956s
Jul 31 08:02:53.810: INFO: Pod "downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315197374s
Jul 31 08:02:55.913: INFO: Pod "downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418027565s
Jul 31 08:02:58.014: INFO: Pod "downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.519715684s
STEP: Saw pod success
Jul 31 08:02:58.014: INFO: Pod "downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7" satisfied condition "Succeeded or Failed"
Jul 31 08:02:58.116: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7 container client-container: <nil>
STEP: delete the pod
Jul 31 08:02:58.354: INFO: Waiting for pod downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7 to disappear
Jul 31 08:02:58.468: INFO: Pod downwardapi-volume-3a0d56c6-7b25-414c-9426-4e3bbd1ea4a7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.018 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:02:58.812: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 115 lines ...
W0731 08:02:48.593140    4876 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 31 08:02:48.593: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Jul 31 08:02:48.898: INFO: Waiting up to 5m0s for pod "test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49" in namespace "svcaccounts-1869" to be "Succeeded or Failed"
Jul 31 08:02:48.999: INFO: Pod "test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 101.286348ms
Jul 31 08:02:51.104: INFO: Pod "test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20614271s
Jul 31 08:02:53.206: INFO: Pod "test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307986243s
Jul 31 08:02:55.309: INFO: Pod "test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410845034s
Jul 31 08:02:57.410: INFO: Pod "test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512569196s
Jul 31 08:02:59.512: INFO: Pod "test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.614777433s
STEP: Saw pod success
Jul 31 08:02:59.513: INFO: Pod "test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49" satisfied condition "Succeeded or Failed"
Jul 31 08:02:59.614: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49 container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:02:59.826: INFO: Waiting for pod test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49 to disappear
Jul 31 08:02:59.928: INFO: Pod test-pod-b1893e0d-62f2-4242-a7db-7d727c06cd49 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.504 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:00.252: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:00.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3929" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:00.311: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 7 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 31 08:02:51.092: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22" in namespace "downward-api-1361" to be "Succeeded or Failed"
Jul 31 08:02:51.193: INFO: Pod "downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22": Phase="Pending", Reason="", readiness=false. Elapsed: 101.225534ms
Jul 31 08:02:53.299: INFO: Pod "downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206610349s
Jul 31 08:02:55.400: INFO: Pod "downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308079895s
Jul 31 08:02:57.503: INFO: Pod "downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41031055s
Jul 31 08:02:59.604: INFO: Pod "downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.512140189s
STEP: Saw pod success
Jul 31 08:02:59.605: INFO: Pod "downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22" satisfied condition "Succeeded or Failed"
Jul 31 08:02:59.706: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22 container client-container: <nil>
STEP: delete the pod
Jul 31 08:03:00.659: INFO: Waiting for pod downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22 to disappear
Jul 31 08:03:00.760: INFO: Pod downwardapi-volume-bba694f5-c6c3-47ec-b186-ecdd814eaa22 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.607 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:00.995: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:00.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-354" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:01.117: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:01.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-5657" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":3,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 7 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
Jul 31 08:02:48.362: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 31 08:02:48.362: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-ztrb
STEP: Creating a pod to test subpath
Jul 31 08:02:48.467: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-ztrb" in namespace "provisioning-6626" to be "Succeeded or Failed"
Jul 31 08:02:48.569: INFO: Pod "pod-subpath-test-inlinevolume-ztrb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.546444ms
Jul 31 08:02:50.673: INFO: Pod "pod-subpath-test-inlinevolume-ztrb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205901597s
Jul 31 08:02:52.775: INFO: Pod "pod-subpath-test-inlinevolume-ztrb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308625403s
Jul 31 08:02:54.879: INFO: Pod "pod-subpath-test-inlinevolume-ztrb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412681913s
Jul 31 08:02:56.983: INFO: Pod "pod-subpath-test-inlinevolume-ztrb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515853888s
Jul 31 08:02:59.089: INFO: Pod "pod-subpath-test-inlinevolume-ztrb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.62233198s
Jul 31 08:03:01.196: INFO: Pod "pod-subpath-test-inlinevolume-ztrb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.729597321s
STEP: Saw pod success
Jul 31 08:03:01.196: INFO: Pod "pod-subpath-test-inlinevolume-ztrb" satisfied condition "Succeeded or Failed"
Jul 31 08:03:01.299: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-ztrb container test-container-subpath-inlinevolume-ztrb: <nil>
STEP: delete the pod
Jul 31 08:03:01.517: INFO: Waiting for pod pod-subpath-test-inlinevolume-ztrb to disappear
Jul 31 08:03:01.619: INFO: Pod pod-subpath-test-inlinevolume-ztrb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-ztrb
Jul 31 08:03:01.619: INFO: Deleting pod "pod-subpath-test-inlinevolume-ztrb" in namespace "provisioning-6626"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:02.058: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
W0731 08:02:48.289620    4684 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 31 08:02:48.289: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 31 08:02:48.627: INFO: Waiting up to 5m0s for pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18" in namespace "downward-api-3259" to be "Succeeded or Failed"
Jul 31 08:02:48.728: INFO: Pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18": Phase="Pending", Reason="", readiness=false. Elapsed: 101.297495ms
Jul 31 08:02:50.830: INFO: Pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203034753s
Jul 31 08:02:52.937: INFO: Pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309680427s
Jul 31 08:02:55.040: INFO: Pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412498076s
Jul 31 08:02:57.142: INFO: Pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514357326s
Jul 31 08:02:59.245: INFO: Pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18": Phase="Pending", Reason="", readiness=false. Elapsed: 10.617335962s
Jul 31 08:03:01.348: INFO: Pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.720367067s
STEP: Saw pod success
Jul 31 08:03:01.348: INFO: Pod "downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18" satisfied condition "Succeeded or Failed"
Jul 31 08:03:01.450: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18 container dapi-container: <nil>
STEP: delete the pod
Jul 31 08:03:01.664: INFO: Waiting for pod downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18 to disappear
Jul 31 08:03:01.781: INFO: Pod downward-api-a16e6c49-14f4-4c0a-b084-7482733d2c18 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:02.098: INFO: Only supported for providers [vsphere] (not aws)
... skipping 302 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:02.823: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 244 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:05.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7510" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":3,"skipped":15,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 31 08:02:59.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019" in namespace "downward-api-4886" to be "Succeeded or Failed"
Jul 31 08:02:59.638: INFO: Pod "downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019": Phase="Pending", Reason="", readiness=false. Elapsed: 101.102ms
Jul 31 08:03:01.755: INFO: Pod "downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218277666s
Jul 31 08:03:03.859: INFO: Pod "downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322107523s
Jul 31 08:03:05.962: INFO: Pod "downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.425650063s
STEP: Saw pod success
Jul 31 08:03:05.962: INFO: Pod "downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019" satisfied condition "Succeeded or Failed"
Jul 31 08:03:06.064: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019 container client-container: <nil>
STEP: delete the pod
Jul 31 08:03:06.274: INFO: Waiting for pod downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019 to disappear
Jul 31 08:03:06.376: INFO: Pod downwardapi-volume-9b4cc262-4256-443c-ad3d-62df3e281019 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.657 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":30,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:06.971: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 49 lines ...
• [SLOW TEST:10.895 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:12.660: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:13.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-220" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":2,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:13.663: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 83 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:436
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:17.807: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
Jul 31 08:03:01.737: INFO: PersistentVolumeClaim pvc-hjsd5 found but phase is Pending instead of Bound.
Jul 31 08:03:03.841: INFO: PersistentVolumeClaim pvc-hjsd5 found and phase=Bound (2.224078266s)
Jul 31 08:03:03.841: INFO: Waiting up to 3m0s for PersistentVolume local-p4t9v to have phase Bound
Jul 31 08:03:03.943: INFO: PersistentVolume local-p4t9v found and phase=Bound (102.587064ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ctfj
STEP: Creating a pod to test subpath
Jul 31 08:03:04.252: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ctfj" in namespace "provisioning-7663" to be "Succeeded or Failed"
Jul 31 08:03:04.359: INFO: Pod "pod-subpath-test-preprovisionedpv-ctfj": Phase="Pending", Reason="", readiness=false. Elapsed: 106.594428ms
Jul 31 08:03:06.465: INFO: Pod "pod-subpath-test-preprovisionedpv-ctfj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212237622s
Jul 31 08:03:08.570: INFO: Pod "pod-subpath-test-preprovisionedpv-ctfj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317210702s
Jul 31 08:03:10.674: INFO: Pod "pod-subpath-test-preprovisionedpv-ctfj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421662407s
Jul 31 08:03:12.779: INFO: Pod "pod-subpath-test-preprovisionedpv-ctfj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52641268s
Jul 31 08:03:14.883: INFO: Pod "pod-subpath-test-preprovisionedpv-ctfj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.630528344s
STEP: Saw pod success
Jul 31 08:03:14.883: INFO: Pod "pod-subpath-test-preprovisionedpv-ctfj" satisfied condition "Succeeded or Failed"
Jul 31 08:03:14.986: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-ctfj container test-container-volume-preprovisionedpv-ctfj: <nil>
STEP: delete the pod
Jul 31 08:03:15.221: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ctfj to disappear
Jul 31 08:03:15.324: INFO: Pod pod-subpath-test-preprovisionedpv-ctfj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ctfj
Jul 31 08:03:15.324: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ctfj" in namespace "provisioning-7663"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:19.098: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 79 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:19.881: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:20.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2201" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:45.476 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:244
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:33.271: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-tcsv
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:03:09.567: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tcsv" in namespace "subpath-7854" to be "Succeeded or Failed"
Jul 31 08:03:09.669: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Pending", Reason="", readiness=false. Elapsed: 101.991443ms
Jul 31 08:03:11.775: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20833402s
Jul 31 08:03:13.879: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311587432s
Jul 31 08:03:15.981: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414437035s
Jul 31 08:03:18.085: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.518162162s
Jul 31 08:03:20.188: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Running", Reason="", readiness=true. Elapsed: 10.621344129s
Jul 31 08:03:22.291: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Running", Reason="", readiness=true. Elapsed: 12.723856637s
Jul 31 08:03:24.395: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Running", Reason="", readiness=true. Elapsed: 14.828033719s
Jul 31 08:03:26.502: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Running", Reason="", readiness=true. Elapsed: 16.93542563s
Jul 31 08:03:28.606: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Running", Reason="", readiness=true. Elapsed: 19.039056966s
Jul 31 08:03:30.710: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Running", Reason="", readiness=true. Elapsed: 21.142531579s
Jul 31 08:03:32.814: INFO: Pod "pod-subpath-test-projected-tcsv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.247214099s
STEP: Saw pod success
Jul 31 08:03:32.814: INFO: Pod "pod-subpath-test-projected-tcsv" satisfied condition "Succeeded or Failed"
Jul 31 08:03:32.918: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-subpath-test-projected-tcsv container test-container-subpath-projected-tcsv: <nil>
STEP: delete the pod
Jul 31 08:03:33.148: INFO: Waiting for pod pod-subpath-test-projected-tcsv to disappear
Jul 31 08:03:33.250: INFO: Pod pod-subpath-test-projected-tcsv no longer exists
STEP: Deleting pod pod-subpath-test-projected-tcsv
Jul 31 08:03:33.250: INFO: Deleting pod "pod-subpath-test-projected-tcsv" in namespace "subpath-7854"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:33.587: INFO: Only supported for providers [vsphere] (not aws)
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 31 08:03:20.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613" in namespace "projected-6748" to be "Succeeded or Failed"
Jul 31 08:03:20.633: INFO: Pod "downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613": Phase="Pending", Reason="", readiness=false. Elapsed: 103.013273ms
Jul 31 08:03:22.735: INFO: Pod "downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20529586s
Jul 31 08:03:24.840: INFO: Pod "downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309807359s
Jul 31 08:03:26.943: INFO: Pod "downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.412803506s
STEP: Saw pod success
Jul 31 08:03:26.943: INFO: Pod "downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613" satisfied condition "Succeeded or Failed"
Jul 31 08:03:27.044: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613 container client-container: <nil>
STEP: delete the pod
Jul 31 08:03:27.260: INFO: Waiting for pod downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613 to disappear
Jul 31 08:03:27.366: INFO: Pod downwardapi-volume-8f6fb520-233a-4241-bc81-715bd8025613 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 7 lines ...
• [SLOW TEST:13.768 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":3,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:34.099: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:34.159: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:03:33.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 87 lines ...
W0731 08:02:48.181914    4872 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 31 08:02:48.182: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul 31 08:02:48.383: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 31 08:02:48.690: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9532" in namespace "provisioning-9532" to be "Succeeded or Failed"
Jul 31 08:02:48.791: INFO: Pod "hostpath-symlink-prep-provisioning-9532": Phase="Pending", Reason="", readiness=false. Elapsed: 100.926025ms
Jul 31 08:02:50.895: INFO: Pod "hostpath-symlink-prep-provisioning-9532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205724691s
Jul 31 08:02:52.998: INFO: Pod "hostpath-symlink-prep-provisioning-9532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308081717s
STEP: Saw pod success
Jul 31 08:02:52.998: INFO: Pod "hostpath-symlink-prep-provisioning-9532" satisfied condition "Succeeded or Failed"
Jul 31 08:02:52.998: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9532" in namespace "provisioning-9532"
Jul 31 08:02:53.103: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9532" to be fully deleted
Jul 31 08:02:53.204: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-n54c
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:02:53.306: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-n54c" in namespace "provisioning-9532" to be "Succeeded or Failed"
Jul 31 08:02:53.407: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Pending", Reason="", readiness=false. Elapsed: 101.722579ms
Jul 31 08:02:55.510: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204105019s
Jul 31 08:02:57.612: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Running", Reason="", readiness=true. Elapsed: 4.306129854s
Jul 31 08:02:59.715: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Running", Reason="", readiness=true. Elapsed: 6.409000309s
Jul 31 08:03:01.819: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Running", Reason="", readiness=true. Elapsed: 8.512779044s
Jul 31 08:03:03.922: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Running", Reason="", readiness=true. Elapsed: 10.615987573s
... skipping 3 lines ...
Jul 31 08:03:12.334: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Running", Reason="", readiness=true. Elapsed: 19.028501124s
Jul 31 08:03:14.437: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Running", Reason="", readiness=true. Elapsed: 21.131170723s
Jul 31 08:03:16.539: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Running", Reason="", readiness=true. Elapsed: 23.233387093s
Jul 31 08:03:18.652: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Running", Reason="", readiness=true. Elapsed: 25.345885642s
Jul 31 08:03:20.754: INFO: Pod "pod-subpath-test-inlinevolume-n54c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.448700295s
STEP: Saw pod success
Jul 31 08:03:20.755: INFO: Pod "pod-subpath-test-inlinevolume-n54c" satisfied condition "Succeeded or Failed"
Jul 31 08:03:20.857: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-n54c container test-container-subpath-inlinevolume-n54c: <nil>
STEP: delete the pod
Jul 31 08:03:21.078: INFO: Waiting for pod pod-subpath-test-inlinevolume-n54c to disappear
Jul 31 08:03:21.179: INFO: Pod pod-subpath-test-inlinevolume-n54c no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-n54c
Jul 31 08:03:21.179: INFO: Deleting pod "pod-subpath-test-inlinevolume-n54c" in namespace "provisioning-9532"
STEP: Deleting pod
Jul 31 08:03:21.280: INFO: Deleting pod "pod-subpath-test-inlinevolume-n54c" in namespace "provisioning-9532"
Jul 31 08:03:21.488: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9532" in namespace "provisioning-9532" to be "Succeeded or Failed"
Jul 31 08:03:21.592: INFO: Pod "hostpath-symlink-prep-provisioning-9532": Phase="Pending", Reason="", readiness=false. Elapsed: 103.426133ms
Jul 31 08:03:23.700: INFO: Pod "hostpath-symlink-prep-provisioning-9532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211862864s
Jul 31 08:03:25.802: INFO: Pod "hostpath-symlink-prep-provisioning-9532": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313953141s
Jul 31 08:03:27.906: INFO: Pod "hostpath-symlink-prep-provisioning-9532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.417470688s
STEP: Saw pod success
Jul 31 08:03:27.906: INFO: Pod "hostpath-symlink-prep-provisioning-9532" satisfied condition "Succeeded or Failed"
Jul 31 08:03:27.906: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9532" in namespace "provisioning-9532"
Jul 31 08:03:28.013: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9532" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:28.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jul 31 08:03:28.227: INFO: Condition Ready of node ip-172-20-51-93.eu-west-2.compute.internal is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule 2021-07-31 08:03:20 +0000 UTC} {node.kubernetes.io/not-ready  NoExecute 2021-07-31 08:03:22 +0000 UTC}]. Failure
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":7,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:34.468: INFO: Only supported for providers [gce gke] (not aws)
... skipping 64 lines ...
Jul 31 08:03:00.632: INFO: PersistentVolumeClaim pvc-n9g5r found but phase is Pending instead of Bound.
Jul 31 08:03:02.737: INFO: PersistentVolumeClaim pvc-n9g5r found and phase=Bound (4.311327931s)
Jul 31 08:03:02.737: INFO: Waiting up to 3m0s for PersistentVolume local-hrbmp to have phase Bound
Jul 31 08:03:02.840: INFO: PersistentVolume local-hrbmp found and phase=Bound (102.887474ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9zsx
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:03:03.154: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9zsx" in namespace "provisioning-3195" to be "Succeeded or Failed"
Jul 31 08:03:03.257: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Pending", Reason="", readiness=false. Elapsed: 102.634588ms
Jul 31 08:03:05.365: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21088513s
Jul 31 08:03:07.469: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314666505s
Jul 31 08:03:09.575: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Running", Reason="", readiness=true. Elapsed: 6.42050616s
Jul 31 08:03:11.680: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Running", Reason="", readiness=true. Elapsed: 8.526304933s
Jul 31 08:03:13.785: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Running", Reason="", readiness=true. Elapsed: 10.631245666s
... skipping 2 lines ...
Jul 31 08:03:20.102: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Running", Reason="", readiness=true. Elapsed: 16.948089904s
Jul 31 08:03:22.206: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Running", Reason="", readiness=true. Elapsed: 19.052404899s
Jul 31 08:03:24.313: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Running", Reason="", readiness=true. Elapsed: 21.159127951s
Jul 31 08:03:26.417: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Running", Reason="", readiness=true. Elapsed: 23.263125313s
Jul 31 08:03:28.522: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.367991909s
STEP: Saw pod success
Jul 31 08:03:28.522: INFO: Pod "pod-subpath-test-preprovisionedpv-9zsx" satisfied condition "Succeeded or Failed"
Jul 31 08:03:28.625: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-9zsx container test-container-subpath-preprovisionedpv-9zsx: <nil>
STEP: delete the pod
Jul 31 08:03:28.843: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9zsx to disappear
Jul 31 08:03:28.947: INFO: Pod pod-subpath-test-preprovisionedpv-9zsx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9zsx
Jul 31 08:03:28.947: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9zsx" in namespace "provisioning-3195"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
Jul 31 08:03:31.901: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod dns-8172/dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba: the server could not find the requested resource (get pods dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba)
Jul 31 08:03:32.005: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod dns-8172/dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba: the server could not find the requested resource (get pods dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba)
Jul 31 08:03:32.107: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-8172.svc.cluster.local from pod dns-8172/dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba: the server could not find the requested resource (get pods dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba)
Jul 31 08:03:32.210: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-8172/dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba: the server could not find the requested resource (get pods dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba)
Jul 31 08:03:32.313: INFO: Unable to read jessie_udp@PodARecord from pod dns-8172/dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba: the server could not find the requested resource (get pods dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba)
Jul 31 08:03:32.416: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8172/dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba: the server could not find the requested resource (get pods dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba)
Jul 31 08:03:32.416: INFO: Lookups using dns-8172/dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_hosts@dns-querier-1.dns-test-service.dns-8172.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul 31 08:03:39.255: INFO: DNS probes using dns-8172/dns-test-3ccd4d9c-bfc2-4ecc-8156-d6d0df9827ba succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:21.848 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":2,"skipped":10,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:39.725: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 135 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-c79af15f-3a37-4a5a-8fc2-59960329ca2b
STEP: Creating a pod to test consume configMaps
Jul 31 08:03:34.842: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f23e6fbe-6d32-4825-8daa-a1da912d225e" in namespace "projected-7531" to be "Succeeded or Failed"
Jul 31 08:03:34.949: INFO: Pod "pod-projected-configmaps-f23e6fbe-6d32-4825-8daa-a1da912d225e": Phase="Pending", Reason="", readiness=false. Elapsed: 106.533906ms
Jul 31 08:03:37.053: INFO: Pod "pod-projected-configmaps-f23e6fbe-6d32-4825-8daa-a1da912d225e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210899188s
Jul 31 08:03:39.211: INFO: Pod "pod-projected-configmaps-f23e6fbe-6d32-4825-8daa-a1da912d225e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.368666874s
STEP: Saw pod success
Jul 31 08:03:39.211: INFO: Pod "pod-projected-configmaps-f23e6fbe-6d32-4825-8daa-a1da912d225e" satisfied condition "Succeeded or Failed"
Jul 31 08:03:39.400: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-projected-configmaps-f23e6fbe-6d32-4825-8daa-a1da912d225e container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:03:39.668: INFO: Waiting for pod pod-projected-configmaps-f23e6fbe-6d32-4825-8daa-a1da912d225e to disappear
Jul 31 08:03:39.771: INFO: Pod pod-projected-configmaps-f23e6fbe-6d32-4825-8daa-a1da912d225e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.888 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:40.011: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 21 lines ...
Jul 31 08:03:34.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul 31 08:03:34.939: INFO: Waiting up to 5m0s for pod "security-context-1de1a049-02e5-4324-b2ce-6ae5933a0d19" in namespace "security-context-7750" to be "Succeeded or Failed"
Jul 31 08:03:35.042: INFO: Pod "security-context-1de1a049-02e5-4324-b2ce-6ae5933a0d19": Phase="Pending", Reason="", readiness=false. Elapsed: 102.866351ms
Jul 31 08:03:37.144: INFO: Pod "security-context-1de1a049-02e5-4324-b2ce-6ae5933a0d19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205421353s
Jul 31 08:03:39.270: INFO: Pod "security-context-1de1a049-02e5-4324-b2ce-6ae5933a0d19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331193973s
STEP: Saw pod success
Jul 31 08:03:39.270: INFO: Pod "security-context-1de1a049-02e5-4324-b2ce-6ae5933a0d19" satisfied condition "Succeeded or Failed"
Jul 31 08:03:39.420: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod security-context-1de1a049-02e5-4324-b2ce-6ae5933a0d19 container test-container: <nil>
STEP: delete the pod
Jul 31 08:03:39.683: INFO: Waiting for pod security-context-1de1a049-02e5-4324-b2ce-6ae5933a0d19 to disappear
Jul 31 08:03:39.793: INFO: Pod security-context-1de1a049-02e5-4324-b2ce-6ae5933a0d19 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 27 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:6.950 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-1281" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:40.732: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 29 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Jul 31 08:03:34.275: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-9ee52c6e-49a8-45bb-b2d1-0881a55633dc" in namespace "security-context-test-4360" to be "Succeeded or Failed"
Jul 31 08:03:34.378: INFO: Pod "alpine-nnp-true-9ee52c6e-49a8-45bb-b2d1-0881a55633dc": Phase="Pending", Reason="", readiness=false. Elapsed: 103.274028ms
Jul 31 08:03:36.482: INFO: Pod "alpine-nnp-true-9ee52c6e-49a8-45bb-b2d1-0881a55633dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206662723s
Jul 31 08:03:38.643: INFO: Pod "alpine-nnp-true-9ee52c6e-49a8-45bb-b2d1-0881a55633dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368382229s
Jul 31 08:03:40.747: INFO: Pod "alpine-nnp-true-9ee52c6e-49a8-45bb-b2d1-0881a55633dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.472136529s
Jul 31 08:03:40.747: INFO: Pod "alpine-nnp-true-9ee52c6e-49a8-45bb-b2d1-0881a55633dc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:40.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4360" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":19,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:41.090: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 123 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Jul 31 08:03:03.665: INFO: Waiting up to 5m0s for pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711" in namespace "pods-1335" to be "Succeeded or Failed"
Jul 31 08:03:03.766: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 101.759747ms
Jul 31 08:03:05.869: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204752938s
Jul 31 08:03:07.974: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30900603s
Jul 31 08:03:10.078: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413164657s
Jul 31 08:03:12.181: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516755739s
Jul 31 08:03:14.285: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 10.620078896s
... skipping 7 lines ...
Jul 31 08:03:31.155: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 27.490321052s
Jul 31 08:03:33.260: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 29.595513502s
Jul 31 08:03:35.367: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 31.701897746s
Jul 31 08:03:37.469: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Pending", Reason="", readiness=false. Elapsed: 33.804107009s
Jul 31 08:03:39.579: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.91435179s
STEP: Saw pod success
Jul 31 08:03:39.579: INFO: Pod "pod-always-succeedd082908e-ffc1-481d-80c5-46c1dd30b711" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:41.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":2,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:42.027: INFO: Only supported for providers [vsphere] (not aws)
... skipping 23 lines ...
Jul 31 08:03:34.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 31 08:03:35.149: INFO: Waiting up to 5m0s for pod "pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc" in namespace "emptydir-5086" to be "Succeeded or Failed"
Jul 31 08:03:35.251: INFO: Pod "pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 101.90881ms
Jul 31 08:03:37.365: INFO: Pod "pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216030936s
Jul 31 08:03:39.487: INFO: Pod "pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338632267s
Jul 31 08:03:41.603: INFO: Pod "pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453893114s
Jul 31 08:03:43.710: INFO: Pod "pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.560819173s
Jul 31 08:03:45.812: INFO: Pod "pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.663059914s
STEP: Saw pod success
Jul 31 08:03:45.812: INFO: Pod "pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc" satisfied condition "Succeeded or Failed"
Jul 31 08:03:45.918: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc container test-container: <nil>
STEP: delete the pod
Jul 31 08:03:46.183: INFO: Waiting for pod pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc to disappear
Jul 31 08:03:46.291: INFO: Pod pod-4c3fd378-313c-497b-b10a-b31e89c3c2fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 67 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":33,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:47.781: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-5167/configmap-test-ab42c2f9-4594-4397-997e-e2ded7f17a41
STEP: Creating a pod to test consume configMaps
Jul 31 08:03:42.265: INFO: Waiting up to 5m0s for pod "pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702" in namespace "configmap-5167" to be "Succeeded or Failed"
Jul 31 08:03:42.367: INFO: Pod "pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702": Phase="Pending", Reason="", readiness=false. Elapsed: 101.757121ms
Jul 31 08:03:44.469: INFO: Pod "pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204629977s
Jul 31 08:03:46.614: INFO: Pod "pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702": Phase="Pending", Reason="", readiness=false. Elapsed: 4.349728859s
Jul 31 08:03:48.719: INFO: Pod "pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.454338409s
STEP: Saw pod success
Jul 31 08:03:48.719: INFO: Pod "pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702" satisfied condition "Succeeded or Failed"
Jul 31 08:03:48.821: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702 container env-test: <nil>
STEP: delete the pod
Jul 31 08:03:49.033: INFO: Waiting for pod pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702 to disappear
Jul 31 08:03:49.136: INFO: Pod pod-configmaps-1c4f6817-4b28-46cc-856b-be8a839c3702 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.821 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0}

SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:03:37.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 86 lines ...
STEP: Destroying namespace "apply-6886" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":3,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:51.171: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 150 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":12,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:03:56.465: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 98 lines ...
Jul 31 08:03:07.141: INFO: PersistentVolumeClaim csi-hostpathc8pvv found but phase is Pending instead of Bound.
Jul 31 08:03:09.244: INFO: PersistentVolumeClaim csi-hostpathc8pvv found but phase is Pending instead of Bound.
Jul 31 08:03:11.348: INFO: PersistentVolumeClaim csi-hostpathc8pvv found but phase is Pending instead of Bound.
Jul 31 08:03:13.464: INFO: PersistentVolumeClaim csi-hostpathc8pvv found and phase=Bound (6.424572713s)
STEP: Creating pod pod-subpath-test-dynamicpv-fjvv
STEP: Creating a pod to test subpath
Jul 31 08:03:13.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-fjvv" in namespace "provisioning-3759" to be "Succeeded or Failed"
Jul 31 08:03:13.871: INFO: Pod "pod-subpath-test-dynamicpv-fjvv": Phase="Pending", Reason="", readiness=false. Elapsed: 101.088372ms
Jul 31 08:03:15.974: INFO: Pod "pod-subpath-test-dynamicpv-fjvv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204018573s
Jul 31 08:03:18.076: INFO: Pod "pod-subpath-test-dynamicpv-fjvv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306113924s
Jul 31 08:03:20.177: INFO: Pod "pod-subpath-test-dynamicpv-fjvv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.407476531s
Jul 31 08:03:22.280: INFO: Pod "pod-subpath-test-dynamicpv-fjvv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510096597s
Jul 31 08:03:24.386: INFO: Pod "pod-subpath-test-dynamicpv-fjvv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.616306185s
Jul 31 08:03:26.498: INFO: Pod "pod-subpath-test-dynamicpv-fjvv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.728579491s
Jul 31 08:03:28.601: INFO: Pod "pod-subpath-test-dynamicpv-fjvv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.831189593s
STEP: Saw pod success
Jul 31 08:03:28.601: INFO: Pod "pod-subpath-test-dynamicpv-fjvv" satisfied condition "Succeeded or Failed"
Jul 31 08:03:28.703: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-fjvv container test-container-subpath-dynamicpv-fjvv: <nil>
STEP: delete the pod
Jul 31 08:03:28.925: INFO: Waiting for pod pod-subpath-test-dynamicpv-fjvv to disappear
Jul 31 08:03:29.044: INFO: Pod pod-subpath-test-dynamicpv-fjvv no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-fjvv
Jul 31 08:03:29.044: INFO: Deleting pod "pod-subpath-test-dynamicpv-fjvv" in namespace "provisioning-3759"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":69,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Jul 31 08:03:34.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul 31 08:03:35.167: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 31 08:03:35.379: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9304" in namespace "provisioning-9304" to be "Succeeded or Failed"
Jul 31 08:03:35.486: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Pending", Reason="", readiness=false. Elapsed: 106.897115ms
Jul 31 08:03:37.590: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210343453s
Jul 31 08:03:39.703: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324085138s
Jul 31 08:03:41.808: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429140301s
Jul 31 08:03:43.914: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Pending", Reason="", readiness=false. Elapsed: 8.534607276s
Jul 31 08:03:46.037: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.65815605s
STEP: Saw pod success
Jul 31 08:03:46.037: INFO: Pod "hostpath-symlink-prep-provisioning-9304" satisfied condition "Succeeded or Failed"
Jul 31 08:03:46.037: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9304" in namespace "provisioning-9304"
Jul 31 08:03:46.147: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9304" to be fully deleted
Jul 31 08:03:46.251: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qm4t
STEP: Creating a pod to test subpath
Jul 31 08:03:46.377: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qm4t" in namespace "provisioning-9304" to be "Succeeded or Failed"
Jul 31 08:03:46.485: INFO: Pod "pod-subpath-test-inlinevolume-qm4t": Phase="Pending", Reason="", readiness=false. Elapsed: 107.559186ms
Jul 31 08:03:48.590: INFO: Pod "pod-subpath-test-inlinevolume-qm4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212158407s
Jul 31 08:03:50.711: INFO: Pod "pod-subpath-test-inlinevolume-qm4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333492913s
Jul 31 08:03:52.832: INFO: Pod "pod-subpath-test-inlinevolume-qm4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.45410937s
STEP: Saw pod success
Jul 31 08:03:52.832: INFO: Pod "pod-subpath-test-inlinevolume-qm4t" satisfied condition "Succeeded or Failed"
Jul 31 08:03:52.935: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-qm4t container test-container-volume-inlinevolume-qm4t: <nil>
STEP: delete the pod
Jul 31 08:03:53.166: INFO: Waiting for pod pod-subpath-test-inlinevolume-qm4t to disappear
Jul 31 08:03:53.269: INFO: Pod pod-subpath-test-inlinevolume-qm4t no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qm4t
Jul 31 08:03:53.269: INFO: Deleting pod "pod-subpath-test-inlinevolume-qm4t" in namespace "provisioning-9304"
STEP: Deleting pod
Jul 31 08:03:53.372: INFO: Deleting pod "pod-subpath-test-inlinevolume-qm4t" in namespace "provisioning-9304"
Jul 31 08:03:53.583: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9304" in namespace "provisioning-9304" to be "Succeeded or Failed"
Jul 31 08:03:53.686: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Pending", Reason="", readiness=false. Elapsed: 103.546159ms
Jul 31 08:03:55.795: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211792383s
Jul 31 08:03:57.898: INFO: Pod "hostpath-symlink-prep-provisioning-9304": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315492323s
STEP: Saw pod success
Jul 31 08:03:57.899: INFO: Pod "hostpath-symlink-prep-provisioning-9304" satisfied condition "Succeeded or Failed"
Jul 31 08:03:57.899: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9304" in namespace "provisioning-9304"
Jul 31 08:03:58.006: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9304" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:58.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9304" for this suite.
... skipping 22 lines ...
Jul 31 08:03:01.542: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jul 31 08:03:02.319: INFO: Successfully created a new PD: "aws://eu-west-2a/vol-091ab99a3c50bc42e".
Jul 31 08:03:02.319: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-nrk8
STEP: Creating a pod to test exec-volume-test
Jul 31 08:03:02.423: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-nrk8" in namespace "volume-6799" to be "Succeeded or Failed"
Jul 31 08:03:02.530: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 106.687783ms
Jul 31 08:03:04.633: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209691712s
Jul 31 08:03:06.746: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.323020619s
Jul 31 08:03:08.851: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428053349s
Jul 31 08:03:10.954: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531427802s
Jul 31 08:03:13.057: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.634234366s
... skipping 11 lines ...
Jul 31 08:03:38.301: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.877848788s
Jul 31 08:03:40.406: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 37.982969576s
Jul 31 08:03:42.509: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 40.086314043s
Jul 31 08:03:44.614: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Pending", Reason="", readiness=false. Elapsed: 42.190658102s
Jul 31 08:03:46.716: INFO: Pod "exec-volume-test-inlinevolume-nrk8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.293511245s
STEP: Saw pod success
Jul 31 08:03:46.717: INFO: Pod "exec-volume-test-inlinevolume-nrk8" satisfied condition "Succeeded or Failed"
Jul 31 08:03:46.820: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod exec-volume-test-inlinevolume-nrk8 container exec-container-inlinevolume-nrk8: <nil>
STEP: delete the pod
Jul 31 08:03:47.031: INFO: Waiting for pod exec-volume-test-inlinevolume-nrk8 to disappear
Jul 31 08:03:47.133: INFO: Pod exec-volume-test-inlinevolume-nrk8 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-nrk8
Jul 31 08:03:47.133: INFO: Deleting pod "exec-volume-test-inlinevolume-nrk8" in namespace "volume-6799"
Jul 31 08:03:47.452: INFO: Couldn't delete PD "aws://eu-west-2a/vol-091ab99a3c50bc42e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-091ab99a3c50bc42e is currently attached to i-083b0055c35d5e50a
	status code: 400, request id: 70dac141-5474-4f82-a6f2-7b1604645c3b
Jul 31 08:03:53.031: INFO: Couldn't delete PD "aws://eu-west-2a/vol-091ab99a3c50bc42e", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-091ab99a3c50bc42e is currently attached to i-083b0055c35d5e50a
	status code: 400, request id: f484b827-8391-4d19-bf22-837835a93130
Jul 31 08:03:58.695: INFO: Successfully deleted PD "aws://eu-west-2a/vol-091ab99a3c50bc42e".
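
The two VolumeInUse errors above are expected: the EBS volume cannot be deleted until it detaches from instance i-083b0055c35d5e50a, so the cleanup retries with a short sleep. A minimal sketch of that retry loop with aws-sdk-go v1; deleteVolumeWithRetry is a hypothetical helper, not the test's own code:

package sketch

import (
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

// deleteVolumeWithRetry keeps calling DeleteVolume while the volume is still
// attached, matching the "Couldn't delete PD ... sleeping 5s" lines above.
func deleteVolumeWithRetry(client ec2iface.EC2API, volumeID string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		_, err = client.DeleteVolume(&ec2.DeleteVolumeInput{VolumeId: aws.String(volumeID)})
		if err == nil {
			return nil
		}
		if aerr, ok := err.(awserr.Error); !ok || aerr.Code() != "VolumeInUse" {
			return err // not a detach race; give up immediately
		}
		time.Sleep(5 * time.Second)
	}
	return err
}
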
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:03:58.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6799" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
Jul 31 08:03:59.697: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.728 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 50 lines ...
• [SLOW TEST:9.179 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:00.425: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 110 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Jul 31 08:03:42.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
Jul 31 08:03:42.565: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 31 08:03:42.784: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3830" in namespace "provisioning-3830" to be "Succeeded or Failed"
Jul 31 08:03:42.886: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 102.575421ms
Jul 31 08:03:44.991: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207347097s
Jul 31 08:03:47.096: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312333147s
Jul 31 08:03:49.200: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41613535s
Jul 31 08:03:51.304: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519866393s
Jul 31 08:03:53.407: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.622959913s
STEP: Saw pod success
Jul 31 08:03:53.407: INFO: Pod "hostpath-symlink-prep-provisioning-3830" satisfied condition "Succeeded or Failed"
Jul 31 08:03:53.407: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3830" in namespace "provisioning-3830"
Jul 31 08:03:53.517: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3830" to be fully deleted
Jul 31 08:03:53.627: INFO: Creating resource for inline volume
Jul 31 08:03:53.627: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Jul 31 08:03:53.628: INFO: Deleting pod "pod-subpath-test-inlinevolume-g2n8" in namespace "provisioning-3830"
Jul 31 08:03:53.834: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3830" in namespace "provisioning-3830" to be "Succeeded or Failed"
Jul 31 08:03:53.936: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 102.529843ms
Jul 31 08:03:56.041: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207077997s
Jul 31 08:03:58.146: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311583591s
Jul 31 08:04:00.249: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415513093s
Jul 31 08:04:02.353: INFO: Pod "hostpath-symlink-prep-provisioning-3830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.519199834s
STEP: Saw pod success
Jul 31 08:04:02.353: INFO: Pod "hostpath-symlink-prep-provisioning-3830" satisfied condition "Succeeded or Failed"
Jul 31 08:04:02.353: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3830" in namespace "provisioning-3830"
Jul 31 08:04:02.460: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3830" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:04:02.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3830" for this suite.
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:03.231: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 37 lines ...
      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:03:58.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-ef27e0c4-20f2-45cf-84df-940e5b7a156b
STEP: Creating a pod to test consume secrets
Jul 31 08:03:59.063: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5cefc5b8-338e-4a7f-a347-ca851255e56a" in namespace "projected-6039" to be "Succeeded or Failed"
Jul 31 08:03:59.167: INFO: Pod "pod-projected-secrets-5cefc5b8-338e-4a7f-a347-ca851255e56a": Phase="Pending", Reason="", readiness=false. Elapsed: 103.029795ms
Jul 31 08:04:01.272: INFO: Pod "pod-projected-secrets-5cefc5b8-338e-4a7f-a347-ca851255e56a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208301339s
Jul 31 08:04:03.392: INFO: Pod "pod-projected-secrets-5cefc5b8-338e-4a7f-a347-ca851255e56a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.328895397s
STEP: Saw pod success
Jul 31 08:04:03.393: INFO: Pod "pod-projected-secrets-5cefc5b8-338e-4a7f-a347-ca851255e56a" satisfied condition "Succeeded or Failed"
Jul 31 08:04:03.499: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-projected-secrets-5cefc5b8-338e-4a7f-a347-ca851255e56a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 31 08:04:03.862: INFO: Waiting for pod pod-projected-secrets-5cefc5b8-338e-4a7f-a347-ca851255e56a to disappear
Jul 31 08:04:03.986: INFO: Pod pod-projected-secrets-5cefc5b8-338e-4a7f-a347-ca851255e56a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.878 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:04.222: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 384 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should drop INVALID conntrack entries
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:282
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Conntrack should drop INVALID conntrack entries","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:04.258: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 31 08:04:00.377: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4d2e875c-56be-4d0d-984d-a2e199dd8a9c" in namespace "security-context-test-9734" to be "Succeeded or Failed"
Jul 31 08:04:00.481: INFO: Pod "busybox-user-65534-4d2e875c-56be-4d0d-984d-a2e199dd8a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 103.720702ms
Jul 31 08:04:02.584: INFO: Pod "busybox-user-65534-4d2e875c-56be-4d0d-984d-a2e199dd8a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206196571s
Jul 31 08:04:04.687: INFO: Pod "busybox-user-65534-4d2e875c-56be-4d0d-984d-a2e199dd8a9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30926578s
Jul 31 08:04:06.789: INFO: Pod "busybox-user-65534-4d2e875c-56be-4d0d-984d-a2e199dd8a9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.411420271s
Jul 31 08:04:06.789: INFO: Pod "busybox-user-65534-4d2e875c-56be-4d0d-984d-a2e199dd8a9c" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:04:06.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9734" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:07.018: INFO: Only supported for providers [gce gke] (not aws)
... skipping 80 lines ...
Jul 31 08:03:02.939: INFO: PersistentVolumeClaim csi-hostpathmpcs4 found but phase is Pending instead of Bound.
Jul 31 08:03:05.044: INFO: PersistentVolumeClaim csi-hostpathmpcs4 found but phase is Pending instead of Bound.
Jul 31 08:03:07.146: INFO: PersistentVolumeClaim csi-hostpathmpcs4 found but phase is Pending instead of Bound.
Jul 31 08:03:09.247: INFO: PersistentVolumeClaim csi-hostpathmpcs4 found and phase=Bound (14.817058727s)
STEP: Expanding non-expandable pvc
Jul 31 08:03:09.453: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul 31 08:03:09.657: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:11.885: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:13.862: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:15.861: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:17.874: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:19.861: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:21.870: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:23.885: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:25.870: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:27.864: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:29.868: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:31.864: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:33.865: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:35.867: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:37.864: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:39.867: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:03:40.091: INFO: Error updating pvc csi-hostpathmpcs4: persistentvolumeclaims "csi-hostpathmpcs4" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
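
Every update above is rejected by design: the admission check only permits resizing a dynamically provisioned PVC whose StorageClass sets AllowVolumeExpansion, and this test deliberately omits that property. A minimal sketch of a class that would permit expansion (the provisioner name is illustrative):

package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// expandableClass returns a StorageClass whose PVCs may be resized, the
// property the "forbidden" errors above say is missing.
func expandableClass(name, provisioner string) *storagev1.StorageClass {
	allow := true
	return &storagev1.StorageClass{
		ObjectMeta:           metav1.ObjectMeta{Name: name},
		Provisioner:          provisioner, // e.g. "hostpath.csi.k8s.io" in this test
		AllowVolumeExpansion: &allow,
	}
}
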
STEP: Deleting pvc
Jul 31 08:03:40.091: INFO: Deleting PersistentVolumeClaim "csi-hostpathmpcs4"
Jul 31 08:03:40.216: INFO: Waiting up to 5m0s for PersistentVolume pvc-6cc2825c-e683-4da3-b8ee-0fc0d585b634 to get deleted
Jul 31 08:03:40.324: INFO: PersistentVolume pvc-6cc2825c-e683-4da3-b8ee-0fc0d585b634 found and phase=Released (107.395537ms)
Jul 31 08:03:45.426: INFO: PersistentVolume pvc-6cc2825c-e683-4da3-b8ee-0fc0d585b634 was removed
STEP: Deleting sc
... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:09.277: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:03:46.553: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Jul 31 08:04:00.917: INFO: PersistentVolumeClaim pvc-2gsp8 found but phase is Pending instead of Bound.
Jul 31 08:04:03.019: INFO: PersistentVolumeClaim pvc-2gsp8 found and phase=Bound (8.536247279s)
Jul 31 08:04:03.019: INFO: Waiting up to 3m0s for PersistentVolume local-2m7h4 to have phase Bound
Jul 31 08:04:03.120: INFO: PersistentVolume local-2m7h4 found and phase=Bound (101.309488ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-tlx4
STEP: Creating a pod to test subpath
Jul 31 08:04:03.434: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-tlx4" in namespace "provisioning-1790" to be "Succeeded or Failed"
Jul 31 08:04:03.554: INFO: Pod "pod-subpath-test-preprovisionedpv-tlx4": Phase="Pending", Reason="", readiness=false. Elapsed: 119.569117ms
Jul 31 08:04:05.666: INFO: Pod "pod-subpath-test-preprovisionedpv-tlx4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23211521s
Jul 31 08:04:07.807: INFO: Pod "pod-subpath-test-preprovisionedpv-tlx4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.372680398s
STEP: Saw pod success
Jul 31 08:04:07.807: INFO: Pod "pod-subpath-test-preprovisionedpv-tlx4" satisfied condition "Succeeded or Failed"
Jul 31 08:04:07.993: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-tlx4 container test-container-volume-preprovisionedpv-tlx4: <nil>
STEP: delete the pod
Jul 31 08:04:08.287: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-tlx4 to disappear
Jul 31 08:04:08.407: INFO: Pod pod-subpath-test-preprovisionedpv-tlx4 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-tlx4
Jul 31 08:04:08.407: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-tlx4" in namespace "provisioning-1790"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:09.998: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 56 lines ...
• [SLOW TEST:8.194 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:10.472: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
STEP: Destroying namespace "apply-6427" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:11.510: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 94 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 40 lines ...
      Driver local doesn't support ext4 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:07.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 17 lines ...
STEP: Destroying namespace "services-4367" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:04:13.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8948" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":6,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:13.535: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 63 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-8e9dbab0-e6cd-4b35-842d-e6ba745349b7
STEP: Creating a pod to test consume configMaps
Jul 31 08:04:14.283: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1db2a2b9-b5a1-4dcc-84ab-fc52b7ba23df" in namespace "projected-6041" to be "Succeeded or Failed"
Jul 31 08:04:14.386: INFO: Pod "pod-projected-configmaps-1db2a2b9-b5a1-4dcc-84ab-fc52b7ba23df": Phase="Pending", Reason="", readiness=false. Elapsed: 103.253255ms
Jul 31 08:04:16.491: INFO: Pod "pod-projected-configmaps-1db2a2b9-b5a1-4dcc-84ab-fc52b7ba23df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208502313s
STEP: Saw pod success
Jul 31 08:04:16.491: INFO: Pod "pod-projected-configmaps-1db2a2b9-b5a1-4dcc-84ab-fc52b7ba23df" satisfied condition "Succeeded or Failed"
Jul 31 08:04:16.595: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-projected-configmaps-1db2a2b9-b5a1-4dcc-84ab-fc52b7ba23df container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:04:16.827: INFO: Waiting for pod pod-projected-configmaps-1db2a2b9-b5a1-4dcc-84ab-fc52b7ba23df to disappear
Jul 31 08:04:16.946: INFO: Pod pod-projected-configmaps-1db2a2b9-b5a1-4dcc-84ab-fc52b7ba23df no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:04:16.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6041" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:17.205: INFO: Only supported for providers [gce gke] (not aws)
... skipping 24 lines ...
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: set up a multi version CRD
Jul 31 08:04:04.808: INFO: >>> kubeConfig: /root/.kube/config
Jul 31 08:04:17.816: FAIL: failed to wait for definition "com.example.crd-publish-openapi-test-multi-to-single-ver.v6alpha1.E2e-test-crd-publish-openapi-328-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/openapi/v2?timeout=32s": dial tcp 35.177.99.229:443: connect: connection refused; lastMsg: 

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00360d680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00360d680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00360d680, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "crd-publish-openapi-4612".
Jul 31 08:04:17.926: FAIL: failed to list events in namespace "crd-publish-openapi-4612"
Unexpected error:
    <*url.Error | 0xc004130720>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/crd-publish-openapi-4612/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00360d680, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "crd-publish-openapi-4612" for this suite.
Jul 31 08:04:18.071: FAIL: Couldn't delete ns: "crd-publish-openapi-4612": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/crd-publish-openapi-4612": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/crd-publish-openapi-4612", Err:(*net.OpError)(0xc002bf2f50)})

Full Stack Trace
panic(0x6a4afe0, 0xc005038880)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc004ed82c0, 0x29c, 0x869f4fa, 0x67, 0x36f, 0xc0048f0a00, 0x4d2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc004ed82c0, 0x29c, 0xc00229c5b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc004ed8000, 0x287, 0xc003cf8280, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc00229c748, 0x76cfa48, 0x9e10598, 0x0, 0xc00229c8b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc00229c748, 0x76cfa48, 0x9e10598, 0xc00229c8b0, 0x2, 0x2, 0xc000100000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc004130720, 0xc00229c8b0, 0x2, 0x2)
... skipping 21 lines ...
• Failure [13.805 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 31 08:04:17.817: failed to wait for definition "com.example.crd-publish-openapi-test-multi-to-single-ver.v6alpha1.E2e-test-crd-publish-openapi-328-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/openapi/v2?timeout=32s": dial tcp 35.177.99.229:443: connect: connection refused; lastMsg: 

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":1,"skipped":7,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]"]}

SS
------------------------------
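Note: every failure in this run bottoms out in the same error shape dumped above — a *url.Error wrapping a *net.OpError whose TCP dial to the apiserver was refused. Purely as an illustration of how that shape is produced and unwrapped (this is not framework code; the reserved TEST-NET address 192.0.2.1 stands in for the dead endpoint and may yield a timeout rather than a refusal, depending on the local network):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "net/http"
        "net/url"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 3 * time.Second}
        // 192.0.2.1 is reserved for documentation, so this request fails.
        _, err := client.Get("https://192.0.2.1/api/v1/nodes")

        var uerr *url.Error
        if errors.As(err, &uerr) {
            // Op and URL are the first two fields in the dumps above.
            fmt.Printf("Op=%q URL=%q\n", uerr.Op, uerr.URL)
        }
        var operr *net.OpError
        if errors.As(err, &operr) {
            // Op/Net/Source/Addr are the nested fields in the dumps above;
            // for a dial failure Op is "dial" and Net is "tcp".
            fmt.Printf("Op=%q Net=%q Addr=%v\n", operr.Op, operr.Net, operr.Addr)
        }
    }
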
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:18.097: INFO: Only supported for providers [vsphere] (not aws)
... skipping 26 lines ...
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-127cbf57-d777-411f-889e-286077ea9e68 in namespace container-probe-1511
Jul 31 08:03:56.618: INFO: Started pod busybox-127cbf57-d777-411f-889e-286077ea9e68 in namespace container-probe-1511
STEP: checking the pod's current state and verifying that restartCount is present
Jul 31 08:03:56.720: INFO: Initial restart count of pod busybox-127cbf57-d777-411f-889e-286077ea9e68 is 0
Jul 31 08:04:17.905: FAIL: getting pod 
Unexpected error:
    <*url.Error | 0xc00348b230>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-1511/pods/busybox-127cbf57-d777-411f-889e-286077ea9e68",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 18 lines ...
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-1511".
Jul 31 08:04:18.014: FAIL: failed to list events in namespace "container-probe-1511"
Unexpected error:
    <*url.Error | 0xc0034091a0>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-1511/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0039a4180, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "container-probe-1511" for this suite.
Jul 31 08:04:18.132: FAIL: Couldn't delete ns: "container-probe-1511": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-1511": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-1511", Err:(*net.OpError)(0xc002acb8b0)})

Full Stack Trace
panic(0x6a4afe0, 0xc00396a180)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00398b600, 0x290, 0x869f4fa, 0x67, 0x36f, 0xc003452a00, 0x4d2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00398b600, 0x290, 0xc003dde5b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc002e65180, 0x27b, 0xc00345c060, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc003dde748, 0x76cfa48, 0x9e10598, 0x0, 0xc003dde8b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc003dde748, 0x76cfa48, 0x9e10598, 0xc003dde8b0, 0x2, 0x2, 0xc00058a000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc0034091a0, 0xc003dde8b0, 0x2, 0x2)
... skipping 22 lines ...
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 31 08:04:17.905: getting pod 
  Unexpected error:
      <*url.Error | 0xc00348b230>: {
          Op: "Get",
          URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-1511/pods/busybox-127cbf57-d777-411f-889e-286077ea9e68",
          Err: {
              Op: "dial",
              Net: "tcp",
              Source: nil,
... skipping 3 lines ...
      }
      Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-1511/pods/busybox-127cbf57-d777-411f-889e-286077ea9e68": dial tcp 35.177.99.229:443: connect: connection refused
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:692
------------------------------
{"msg":"FAILED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":42,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}

S
------------------------------
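Note: the probe spec that follows uses a simple observe-and-compare pattern — record the pod's initial restartCount, then re-read the pod for a while and fail if the count changes. A rough, hypothetical client-go sketch of that loop (placeholder namespace and pod names, kubeconfig assumed at the default path; not the suite's actual implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Placeholders: the real test generates its own namespace/pod names.
        ns, name := "container-probe-example", "busybox-example"

        initial := int32(-1)
        for i := 0; i < 12; i++ {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                panic(err) // the run below dies on this Get: connect: connection refused
            }
            if len(pod.Status.ContainerStatuses) > 0 {
                rc := pod.Status.ContainerStatuses[0].RestartCount
                if initial < 0 {
                    initial = rc // first observation becomes the baseline
                } else if rc != initial {
                    panic(fmt.Sprintf("pod restarted: count %d -> %d", initial, rc))
                }
            }
            time.Sleep(10 * time.Second)
        }
        fmt.Println("pod was not restarted during the observation window")
    }
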
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:17.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul 31 08:04:17.975: FAIL: Unexpected error:
    <*errors.errorString | 0xc0036c6650>: {
        s: "listing schedulable nodes error: error: Get \"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse\": dial tcp 35.177.99.229:443: connect: connection refused. Non-retryable failure or timed out while listing nodes for e2e cluster",
    }
    listing schedulable nodes error: error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse": dial tcp 35.177.99.229:443: connect: connection refused. Non-retryable failure or timed out while listing nodes for e2e cluster
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/drivers.(*localDriver).PrepareTest(0xc00212e800, 0xc00270c2c0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1893 +0xa5
k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func1()
... skipping 8 lines ...
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-939".
Jul 31 08:04:18.085: FAIL: failed to list events in namespace "provisioning-939"
Unexpected error:
    <*url.Error | 0xc00350f260>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-939/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc002671980, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "provisioning-939" for this suite.
Jul 31 08:04:18.197: FAIL: Couldn't delete ns: "provisioning-939": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-939": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-939", Err:(*net.OpError)(0xc0039eb0e0)})

Full Stack Trace
panic(0x6a4afe0, 0xc0029a1b40)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc001c16000, 0x284, 0x869f4fa, 0x67, 0x36f, 0xc004175400, 0x4d2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc001c16000, 0x284, 0xc0036925b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc00437a000, 0x26f, 0xc00410def0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc003692748, 0x76cfa48, 0x9e10598, 0x0, 0xc0036928b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc003692748, 0x76cfa48, 0x9e10598, 0xc0036928b0, 0x2, 0x2, 0xc00007d400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc00350f260, 0xc0036928b0, 0x2, 0x2)
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Jul 31 08:04:17.975: Unexpected error:
          <*errors.errorString | 0xc0036c6650>: {
              s: "listing schedulable nodes error: error: Get \"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse\": dial tcp 35.177.99.229:443: connect: connection refused. Non-retryable failure or timed out while listing nodes for e2e cluster",
          }
          listing schedulable nodes error: error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse": dial tcp 35.177.99.229:443: connect: connection refused. Non-retryable failure or timed out while listing nodes for e2e cluster
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1893
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":41,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:18.208: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
Jul 31 08:04:14.641: INFO: stderr: ""
Jul 31 08:04:14.641: INFO: stdout: "e2e-test-crd-publish-openapi-9180-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 31 08:04:14.641: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7840 explain e2e-test-crd-publish-openapi-9180-crds'
Jul 31 08:04:15.216: INFO: stderr: ""
Jul 31 08:04:15.216: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9180-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<>\n     Specification of Waldo\n\n   status\t<Object>\n     Status of Waldo\n\n"
Jul 31 08:04:18.033: FAIL: failed to wait for definition "com.example.crd-publish-openapi-test-unknown-in-nested.v1.E2e-test-crd-publish-openapi-9180-crd" not to be served anymore: failed to wait for OpenAPI spec validating condition: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/openapi/v2?timeout=32s": dial tcp 35.177.99.229:443: connect: connection refused; lastMsg: spec.SwaggerProps.Definitions["com.example.crd-publish-openapi-test-unknown-in-nested.v1.E2e-test-crd-publish-openapi-9180-crd"] still exists

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000f40f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000f40f00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000f40f00, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "crd-publish-openapi-7840".
Jul 31 08:04:18.146: FAIL: failed to list events in namespace "crd-publish-openapi-7840"
Unexpected error:
    <*url.Error | 0xc002f59a70>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/crd-publish-openapi-7840/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000f40f00, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "crd-publish-openapi-7840" for this suite.
Jul 31 08:04:18.258: FAIL: Couldn't delete ns: "crd-publish-openapi-7840": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/crd-publish-openapi-7840": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/crd-publish-openapi-7840", Err:(*net.OpError)(0xc0006c22d0)})

Full Stack Trace
panic(0x6a4afe0, 0xc0064dbac0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc004854dc0, 0x29c, 0x869f4fa, 0x67, 0x36f, 0xc00652b900, 0x4d2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc004854dc0, 0x29c, 0xc00287c5b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc004854b00, 0x287, 0xc002b65590, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc00287c748, 0x76cfa48, 0x9e10598, 0x0, 0xc00287c8b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc00287c748, 0x76cfa48, 0x9e10598, 0xc00287c8b0, 0x2, 0x2, 0xc0024f3c00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc002f59a70, 0xc00287c8b0, 0x2, 0x2)
... skipping 21 lines ...
• Failure [15.447 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 31 08:04:18.033: failed to wait for definition "com.example.crd-publish-openapi-test-unknown-in-nested.v1.E2e-test-crd-publish-openapi-9180-crd" not to be served anymore: failed to wait for OpenAPI spec validating condition: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/openapi/v2?timeout=32s": dial tcp 35.177.99.229:443: connect: connection refused; lastMsg: spec.SwaggerProps.Definitions["com.example.crd-publish-openapi-test-unknown-in-nested.v1.E2e-test-crd-publish-openapi-9180-crd"] still exists

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
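Note: the CustomResourcePublishOpenAPI specs above all wait on the same condition — poll the aggregated /openapi/v2 document until a definition appears in (or disappears from) spec.definitions, which is why their failures end with "failed to wait for OpenAPI spec validating condition". A rough stand-alone sketch of that poll, assuming `kubectl proxy` is serving on 127.0.0.1:8001 and using a placeholder definition key (the real framework polls through its authenticated client instead):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        const def = "com.example.some-group.v1.SomeKind" // placeholder definition key
        for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(5 * time.Second) {
            resp, err := http.Get("http://127.0.0.1:8001/openapi/v2")
            if err != nil {
                // The run above died at this point with "connection refused".
                fmt.Println("transient error:", err)
                continue
            }
            var spec struct {
                Definitions map[string]json.RawMessage `json:"definitions"`
            }
            err = json.NewDecoder(resp.Body).Decode(&spec)
            resp.Body.Close()
            if err == nil {
                if _, still := spec.Definitions[def]; !still {
                    fmt.Println("definition removed from the published spec")
                    return
                }
                fmt.Println("definition still exists; retrying")
            }
        }
        fmt.Println("timed out waiting for the definition to go away")
    }
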
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:18.265: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":2,"skipped":48,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
Jul 31 08:03:21.733: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-qs6tt] to have phase Bound
Jul 31 08:03:21.834: INFO: PersistentVolumeClaim pvc-qs6tt found and phase=Bound (100.721864ms)
STEP: Deleting the previously created pod
Jul 31 08:03:42.340: INFO: Deleting pod "pvc-volume-tester-g99t9" in namespace "csi-mock-volumes-8222"
Jul 31 08:03:42.442: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g99t9" to be fully deleted
STEP: Checking CSI driver logs
Jul 31 08:03:46.755: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/282a2157-7fe4-4a26-8d0e-2660a77b5531/volumes/kubernetes.io~csi/pvc-0477c1bb-6afd-47a8-b1c8-082d4e667a49/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-g99t9
Jul 31 08:03:46.755: INFO: Deleting pod "pvc-volume-tester-g99t9" in namespace "csi-mock-volumes-8222"
STEP: Deleting claim pvc-qs6tt
Jul 31 08:03:47.066: INFO: Waiting up to 2m0s for PersistentVolume pvc-0477c1bb-6afd-47a8-b1c8-082d4e667a49 to get deleted
Jul 31 08:03:47.169: INFO: PersistentVolume pvc-0477c1bb-6afd-47a8-b1c8-082d4e667a49 found and phase=Released (102.570603ms)
Jul 31 08:03:49.271: INFO: PersistentVolume pvc-0477c1bb-6afd-47a8-b1c8-082d4e667a49 found and phase=Released (2.204416375s)
... skipping 35 lines ...
Jul 31 08:04:05.109: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-8222
Jul 31 08:04:05.267: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8222-4979/csi-mockplugin
Jul 31 08:04:05.372: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-8222
Jul 31 08:04:05.504: INFO: deleting *v1.StatefulSet: csi-mock-volumes-8222-4979/csi-mockplugin-attacher
STEP: deleting the driver namespace: csi-mock-volumes-8222-4979
STEP: Waiting for namespaces [csi-mock-volumes-8222-4979] to vanish
Jul 31 08:04:17.850: INFO: error deleting namespace csi-mock-volumes-8222-4979: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:04:17.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jul 31 08:04:18.073: FAIL: All nodes should be ready after test, Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.177.99.229:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0039aa480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0039aa480)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0039aa480, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "csi-mock-volumes-8222-4979" for this suite.
STEP: Collecting events from namespace "csi-mock-volumes-8222-4979".
Jul 31 08:04:18.290: FAIL: failed to list events in namespace "csi-mock-volumes-8222-4979"
Unexpected error:
    <*url.Error | 0xc003393d40>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-8222-4979/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 10 lines ...
k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo(0x7777c78, 0xc0020189a0, 0xc0008a0ae0, 0x1a)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:897 +0xa5
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1(0xc00106e000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:398 +0x3c6
panic(0x6a4afe0, 0xc003860400)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc001f55500, 0xbc, 0x8652232, 0x87, 0x71, 0xc0023e6e00, 0x1a1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc001f55500, 0xbc, 0xc00263ec80, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x6f65f0a, 0x28, 0xc00263edc8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00106e000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:479 +0x4e5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0039aa480)
... skipping 43 lines ...
Jul 31 08:04:10.781: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 31 08:04:10.984: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jul 31 08:04:11.211: INFO: The status of Pod netserver-2 is Running (Ready = false)
Jul 31 08:04:13.313: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jul 31 08:04:13.516: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jul 31 08:04:18.071: FAIL: Unexpected error:
    <*url.Error | 0xc0036951a0>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-596/pods/test-container-pod",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 21 lines ...
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "pod-network-test-596".
Jul 31 08:04:18.187: FAIL: failed to list events in namespace "pod-network-test-596"
Unexpected error:
    <*url.Error | 0xc0037b7440>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-596/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000ba7500, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "pod-network-test-596" for this suite.
Jul 31 08:04:18.296: FAIL: Couldn't delete ns: "pod-network-test-596": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-596": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-596", Err:(*net.OpError)(0xc00394a780)})

Full Stack Trace
panic(0x6a4afe0, 0xc003b74d00)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc001c302c0, 0x290, 0x869f4fa, 0x67, 0x36f, 0xc003f77400, 0x4d2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc001c302c0, 0x290, 0xc0006705b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc003ff8780, 0x27b, 0xc003979b90, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc000670748, 0x76cfa48, 0x9e10598, 0x0, 0xc0006708b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc000670748, 0x76cfa48, 0x9e10598, 0xc0006708b0, 0x2, 0x2, 0xc0003f5000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc0037b7440, 0xc0006708b0, 0x2, 0x2)
... skipping 23 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul 31 08:04:18.071: Unexpected error:
        <*url.Error | 0xc0036951a0>: {
            Op: "Get",
            URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-596/pods/test-container-pod",
            Err: {
                Op: "dial",
                Net: "tcp",
                Source: nil,
... skipping 3 lines ...
        }
        Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/pod-network-test-596/pods/test-container-pod": dial tcp 35.177.99.229:443: connect: connection refused
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:724
------------------------------
{"msg":"FAILED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":0,"skipped":15,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}

S
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":54,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:18.314: INFO: Only supported for providers [vsphere] (not aws)
... skipping 63 lines ...
Jul 31 08:02:53.768: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4913
Jul 31 08:02:53.872: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4913
Jul 31 08:02:53.976: INFO: creating *v1.StatefulSet: csi-mock-volumes-4913-1757/csi-mockplugin
Jul 31 08:02:54.082: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-4913
Jul 31 08:02:54.188: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-4913"
Jul 31 08:02:54.290: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-4913 to register on node ip-172-20-58-77.eu-west-2.compute.internal
I0731 08:03:15.075828    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0731 08:03:15.180926    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4913","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0731 08:03:15.283234    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I0731 08:03:15.385833    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0731 08:03:15.613028    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-4913","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0731 08:03:16.155692    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-4913","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
Jul 31 08:03:21.485: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0731 08:03:21.742975    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0731 08:03:24.574903    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I0731 08:03:26.108003    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul 31 08:03:26.209: INFO: >>> kubeConfig: /root/.kube/config
I0731 08:03:26.972707    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2","storage.kubernetes.io/csiProvisionerIdentity":"1627718595476-8081-csi-mock-csi-mock-volumes-4913"}},"Response":{},"Error":"","FullError":null}
I0731 08:03:27.077156    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul 31 08:03:27.181: INFO: >>> kubeConfig: /root/.kube/config
Jul 31 08:03:27.899: INFO: >>> kubeConfig: /root/.kube/config
Jul 31 08:03:28.651: INFO: >>> kubeConfig: /root/.kube/config
I0731 08:03:29.419514    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2/globalmount","target_path":"/var/lib/kubelet/pods/568f7402-4571-4a43-8d83-3cec3463b5af/volumes/kubernetes.io~csi/pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2","storage.kubernetes.io/csiProvisionerIdentity":"1627718595476-8081-csi-mock-csi-mock-volumes-4913"}},"Response":{},"Error":"","FullError":null}
Jul 31 08:03:31.906: INFO: Deleting pod "pvc-volume-tester-5dk6s" in namespace "csi-mock-volumes-4913"
Jul 31 08:03:32.010: INFO: Wait up to 5m0s for pod "pvc-volume-tester-5dk6s" to be fully deleted
Jul 31 08:03:34.484: INFO: >>> kubeConfig: /root/.kube/config
I0731 08:03:35.325296    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/568f7402-4571-4a43-8d83-3cec3463b5af/volumes/kubernetes.io~csi/pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2/mount"},"Response":{},"Error":"","FullError":null}
I0731 08:03:35.485026    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0731 08:03:35.589422    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a178c935-9a9c-4ecf-9e0d-3fdbe75342e2/globalmount"},"Response":{},"Error":"","FullError":null}
I0731 08:03:40.409492    4717 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jul 31 08:03:41.337: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mnntp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4913", SelfLink:"", UID:"a178c935-9a9c-4ecf-9e0d-3fdbe75342e2", ResourceVersion:"2820", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315401, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003a88d50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003a88d68)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00374cb70), VolumeMode:(*v1.PersistentVolumeMode)(0xc00374cb80), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 31 08:03:41.338: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mnntp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4913", SelfLink:"", UID:"a178c935-9a9c-4ecf-9e0d-3fdbe75342e2", ResourceVersion:"2825", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315401, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-58-77.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003801830), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003801848)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003801860), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003801878)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002c63030), VolumeMode:(*v1.PersistentVolumeMode)(0xc002c63040), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 31 08:03:41.338: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mnntp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4913", SelfLink:"", UID:"a178c935-9a9c-4ecf-9e0d-3fdbe75342e2", ResourceVersion:"2830", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315401, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4913", "volume.kubernetes.io/selected-node":"ip-172-20-58-77.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00307ff68), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00307ff80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00307ff98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00307ffb0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00307ffc8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00307ffe0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002db9850), VolumeMode:(*v1.PersistentVolumeMode)(0xc002db9860), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 31 08:03:41.338: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mnntp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4913", SelfLink:"", UID:"a178c935-9a9c-4ecf-9e0d-3fdbe75342e2", ResourceVersion:"2836", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315401, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4913"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0031b4000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0031b4018)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0031b4030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0031b4048)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0031b4060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0031b4078)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002db9890), VolumeMode:(*v1.PersistentVolumeMode)(0xc002db98a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 31 08:03:41.338: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-mnntp", GenerateName:"pvc-", Namespace:"csi-mock-volumes-4913", SelfLink:"", UID:"a178c935-9a9c-4ecf-9e0d-3fdbe75342e2", ResourceVersion:"2891", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315401, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-4913", "volume.kubernetes.io/selected-node":"ip-172-20-58-77.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003160588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0031605a0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0031605b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0031605d0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0031605e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003160600)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003146b60), VolumeMode:(*v1.PersistentVolumeMode)(0xc003146b70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 38 lines ...
Jul 31 08:03:51.465: INFO: deleting *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-4913
Jul 31 08:03:51.573: INFO: deleting *v1.StorageClass: csi-mock-sc-csi-mock-volumes-4913
Jul 31 08:03:51.678: INFO: deleting *v1.StatefulSet: csi-mock-volumes-4913-1757/csi-mockplugin
Jul 31 08:03:51.782: INFO: deleting *v1.CSIDriver: csi-mock-csi-mock-volumes-4913
STEP: deleting the driver namespace: csi-mock-volumes-4913-1757
STEP: Waiting for namespaces [csi-mock-volumes-4913-1757] to vanish
Jul 31 08:04:18.100: INFO: error deleting namespace csi-mock-volumes-4913-1757: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
[AfterEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:04:18.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jul 31 08:04:18.311: FAIL: All nodes should be ready after test, Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.177.99.229:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00137f200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00137f200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00137f200, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "csi-mock-volumes-4913-1757" for this suite.
STEP: Collecting events from namespace "csi-mock-volumes-4913-1757".
Jul 31 08:04:18.527: FAIL: failed to list events in namespace "csi-mock-volumes-4913-1757"
Unexpected error:
    <*url.Error | 0xc0031bb6b0>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/csi-mock-volumes-4913-1757/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 10 lines ...
k8s.io/kubernetes/test/e2e/framework.DumpAllNamespaceInfo(0x7777c78, 0xc002b21600, 0xc000b706c0, 0x1a)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:897 +0xa5
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach.func1(0xc00122c580)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:398 +0x3c6
panic(0x6a4afe0, 0xc00051f800)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00389acc0, 0xbc, 0x8652232, 0x87, 0x71, 0xc001066380, 0x1a1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00389acc0, 0xbc, 0xc0012fec80, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x6f65f0a, 0x28, 0xc0012fedc8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc00122c580)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:479 +0x4e5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00137f200)
... skipping 15 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958

    Jul 31 08:04:18.311: All nodes should be ready after test, Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/nodes": dial tcp 35.177.99.229:443: connect: connection refused

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":0,"skipped":0,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}

SSSSSS
------------------------------
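Note: both CSI teardowns above were stuck in the same "Waiting for namespaces [...] to vanish" step — poll the namespace until Get returns NotFound, treating other errors (like the "connection refused" seen here) as transient. A hedged client-go sketch of that loop (placeholder namespace name, kubeconfig at the default path; not the suite's code):

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        ns := "csi-mock-volumes-example" // placeholder; the suite generates its own
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            _, err := cs.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                fmt.Println("namespace is gone")
                return
            }
            if err != nil {
                // Teardown above looped here on "connection refused".
                fmt.Println("transient error:", err)
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out waiting for the namespace to vanish")
    }
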
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158
[BeforeEach] [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195
STEP: Initializing test volumes
STEP: Creating block device on node "ip-172-20-58-77.eu-west-2.compute.internal" using path "/tmp/local-volume-test-774abb71-3681-419d-8b4e-802830b84d7e"
Jul 31 08:04:18.397: FAIL: Unexpected error:
    <*url.Error | 0xc003103a70>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-8808/pods/hostexec-ip-172-20-58-77.eu-west-2.compute.internal-xmngn",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 38 lines ...
[AfterEach] [Volume type: blockfswithformat]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204
STEP: Cleaning up PVC and PV
[AfterEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "persistent-local-volumes-test-8808".
Jul 31 08:04:18.507: FAIL: failed to list events in namespace "persistent-local-volumes-test-8808"
Unexpected error:
    <*url.Error | 0xc002e51dd0>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-8808/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0031c9500, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "persistent-local-volumes-test-8808" for this suite.
Jul 31 08:04:18.615: FAIL: Couldn't delete ns: "persistent-local-volumes-test-8808": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-8808": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-8808", Err:(*net.OpError)(0xc00319a140)})

Full Stack Trace
panic(0x6a4afe0, 0xc002e668c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0033c42c0, 0x2ba, 0x869f4fa, 0x67, 0x36f, 0xc0033a0f00, 0x4d2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0033c42c0, 0x2ba, 0xc001dce5b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc0033c4000, 0x2a5, 0xc00339c5d0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc001dce748, 0x76cfa48, 0x9e10598, 0x0, 0xc001dce8b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc001dce748, 0x76cfa48, 0x9e10598, 0xc001dce8b0, 0x2, 0x2, 0xc0002f8800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc002e51dd0, 0xc001dce8b0, 0x2, 0x2)
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232

      Jul 31 08:04:18.398: Unexpected error:
          <*url.Error | 0xc003103a70>: {
              Op: "Get",
              URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-8808/pods/hostexec-ip-172-20-58-77.eu-west-2.compute.internal-xmngn",
              Err: {
                  Op: "dial",
                  Net: "tcp",
                  Source: nil,
... skipping 3 lines ...
          }
          Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/persistent-local-volumes-test-8808/pods/hostexec-ip-172-20-58-77.eu-west-2.compute.internal-xmngn": dial tcp 35.177.99.229:443: connect: connection refused
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":29,"failed":1,"failures":["[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]}
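
The AfterEach above fails twice more after the test itself: collecting events and deleting the namespace both need the same dead apiserver. A minimal client-go sketch of the event collection it attempts, assuming the kubeconfig path the suite uses (/root/.kube/config):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// With the apiserver down, this List is the Get .../events call
	// that returns "connection refused" above.
	events, err := cs.CoreV1().Events("persistent-local-volumes-test-8808").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%v %s: %s\n", e.LastTimestamp, e.Reason, e.Message)
	}
}
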
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:18.626: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
Jul 31 08:04:14.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315449, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315449, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315449, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315449, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 31 08:04:17.229: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 31 08:04:17.867: FAIL: Creating validating webhook configuration
Unexpected error:
    <*url.Error | 0xc003568a80>: {
        Op: "Post",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations",
        Err: {
            Op: "read",
            Net: "tcp",
            Source: {
... skipping 19 lines ...
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "webhook-4131".
Jul 31 08:04:17.980: FAIL: failed to list events in namespace "webhook-4131"
Unexpected error:
    <*url.Error | 0xc0037d24b0>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/webhook-4131/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 20 lines ...
testing.tRunner(0xc0025da900, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "webhook-4131" for this suite.
STEP: Destroying namespace "webhook-4131-markers" for this suite.
Jul 31 08:04:18.194: FAIL: Couldn't delete ns: "webhook-4131": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/webhook-4131": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/webhook-4131", Err:(*net.OpError)(0xc0028e1900)}),Couldn't delete ns: "webhook-4131-markers": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/webhook-4131-markers": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/webhook-4131-markers", Err:(*net.OpError)(0xc002b0f180)})

Full Stack Trace
panic(0x6a4afe0, 0xc000a5ef40)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00395ca00, 0x278, 0x869f4fa, 0x67, 0x36f, 0xc003a3aa00, 0x4d0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00395ca00, 0x278, 0xc003d265b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc00395c280, 0x263, 0xc0019de8a0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc003d26748, 0x76cfa48, 0x9e10598, 0x0, 0xc003d268b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc003d26748, 0x76cfa48, 0x9e10598, 0xc003d268b0, 0x2, 0x2, 0xc000780000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc0037d24b0, 0xc003d268b0, 0x2, 0x2)
... skipping 24 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 31 08:04:17.867: Creating validating webhook configuration
  Unexpected error:
      <*url.Error | 0xc003568a80>: {
          Op: "Post",
          URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations",
          Err: {
              Op: "read",
              Net: "tcp",
              Source: {
... skipping 9 lines ...
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:595
------------------------------
S
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":4,"skipped":34,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

SS
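
The webhook test above fails one step earlier than the storage tests: the initial Post creating the ValidatingWebhookConfiguration never lands. A rough client-go sketch of such a create, reusing the service name and namespace from the log; the configuration name, webhook name, service path, and empty CA bundle are hypothetical placeholders, not the suite's actual values:

package main

import (
	"context"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	path := "/validate" // hypothetical service path
	sideEffects := admissionregistrationv1.SideEffectClassNone
	var caBundle []byte // CA for the webhook server; omitted in this sketch
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-webhook-config"}, // hypothetical name
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "validation.webhook.example.com", // hypothetical webhook name
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-4131",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	// This is the Post to .../validatingwebhookconfigurations that fails above.
	created, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(context.TODO(), cfg, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", created.Name)
}
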
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:18.652: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 100 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:03:33.868: INFO: >>> kubeConfig: /root/.kube/config
... skipping 26 lines ...
Jul 31 08:03:58.387: INFO: Wait up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5g6c" to be fully deleted
STEP: Deleting pod
Jul 31 08:04:18.599: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5g6c" in namespace "provisioning-548"
STEP: Deleting pv and pvc
Jul 31 08:04:18.706: INFO: Deleting PersistentVolumeClaim "pvc-dzl4h"
Jul 31 08:04:18.813: INFO: Deleting PersistentVolume "local-bkwv6"
Jul 31 08:04:18.922: FAIL: Failed to delete PVC or PV: [failed to delete PVC "pvc-dzl4h": PVC Delete API error: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-548/persistentvolumeclaims/pvc-dzl4h": dial tcp 35.177.99.229:443: connect: connection refused, failed to delete PV "local-bkwv6": PV Delete API error: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-bkwv6": dial tcp 35.177.99.229:443: connect: connection refused]

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177 +0x248
k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func20()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:472 +0x5af
... skipping 5 lines ...
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-548".
Jul 31 08:04:19.031: FAIL: failed to list events in namespace "provisioning-548"
Unexpected error:
    <*url.Error | 0xc003219ce0>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-548/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003106600, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "provisioning-548" for this suite.
Jul 31 08:04:19.148: FAIL: Couldn't delete ns: "provisioning-548": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-548": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-548", Err:(*net.OpError)(0xc0014a3810)})

Full Stack Trace
panic(0x6a4afe0, 0xc002b03b40)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0005df080, 0x284, 0x869f4fa, 0x67, 0x36f, 0xc001dc7900, 0x4d2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0005df080, 0x284, 0xc0022085b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc0020e4000, 0x26f, 0xc0024600a0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc002208748, 0x76cfa48, 0x9e10598, 0x0, 0xc0022088b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc002208748, 0x76cfa48, 0x9e10598, 0xc0022088b0, 0x2, 0x2, 0xc0029c8400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc003219ce0, 0xc0022088b0, 0x2, 0x2)
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444

      Jul 31 08:04:18.922: Failed to delete PVC or PV: [failed to delete PVC "pvc-dzl4h": PVC Delete API error: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/provisioning-548/persistentvolumeclaims/pvc-dzl4h": dial tcp 35.177.99.229:443: connect: connection refused, failed to delete PV "local-bkwv6": PV Delete API error: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-bkwv6": dial tcp 35.177.99.229:443: connect: connection refused]

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:177
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]"]}

SS
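
The subPath teardown above tries the PVC and PV deletions independently and reports both failures together. A minimal client-go sketch of that cleanup pair, using the object names from the log and assuming the same kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// Both deletes are attempted even if the first fails, which is why the
	// log above lists a PVC error and a PV error side by side.
	if err := cs.CoreV1().PersistentVolumeClaims("provisioning-548").Delete(ctx, "pvc-dzl4h", metav1.DeleteOptions{}); err != nil {
		fmt.Println("PVC Delete API error:", err)
	}
	if err := cs.CoreV1().PersistentVolumes().Delete(ctx, "local-bkwv6", metav1.DeleteOptions{}); err != nil {
		fmt.Println("PV Delete API error:", err)
	}
}
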
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:19.178: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 185 lines ...
Jul 31 08:04:17.398: INFO: Pod aws-client still exists
Jul 31 08:04:19.295: INFO: Waiting for pod aws-client to disappear
STEP: cleaning the environment after aws
STEP: Deleting pvc
Jul 31 08:04:19.513: INFO: Deleting PersistentVolumeClaim "aws8l5x8"
STEP: Deleting sc
Jul 31 08:04:35.144: FAIL: while cleaning up resource
Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        [
            {
                error: {
                    cause: {
                        Op: "Get",
                        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/persistentvolumeclaims/aws8l5x8",
                        Err: {
                            Op: "dial",
                            Net: "tcp",
                            Source: nil,
                            Addr: {IP: "#\xb1c\xe5", Port: 443, Zone: ""},
                            Err: {Syscall: "connect", Err: 0x6f},
                        },
                    },
                    msg: "Failed to find PVC aws8l5x8",
                },
                stack: [0x5c54c46, 0x5d13e85, 0x5d14065, 0x5d146e5, 0x248ae03, 0x248aa1c, 0x2489d47, 0x249104f, 0x24906f2, 0x2496991, 0x24964a7, 0x2495c97, 0x2498326, 0x249adf8, 0x249ab4d, 0x5dbabec, 0x5dbf34b, 0x21ab82f, 0x20ef1a1],
            },
            {
                error: {
                    cause: {
                        s: "PVC Delete API error: Delete \"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/persistentvolumeclaims/aws8l5x8\": dial tcp 35.177.99.229:443: connect: connection refused",
                    },
                    msg: "Failed to delete PVC aws8l5x8",
                },
                stack: [0x5c54766, 0x5d13e85, 0x5d14065, 0x5d146e5, 0x248ae03, 0x248aa1c, 0x2489d47, 0x249104f, 0x24906f2, 0x2496991, 0x24964a7, 0x2495c97, 0x2498326, 0x249adf8, 0x249ab4d, 0x5dbabec, 0x5dbf34b, 0x21ab82f, 0x20ef1a1],
            },
            {
                error: {
                    cause: {
                        Op: "Delete",
                        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-7211ch2lw",
                        Err: {
                            Op: "dial",
                            Net: "tcp",
                            Source: nil,
                            Addr: {IP: "#\xb1c\xe5", Port: 443, Zone: ""},
                            Err: {Syscall: "connect", Err: 0x6f},
                        },
                    },
                    msg: "Failed to delete StorageClass volume-7211ch2lw",
                },
                stack: [0x5c541ab, 0x5d13e85, 0x5d14065, 0x5d146e5, 0x248ae03, 0x248aa1c, 0x2489d47, 0x249104f, 0x24906f2, 0x2496991, 0x24964a7, 0x2495c97, 0x2498326, 0x249adf8, 0x249ab4d, 0x5dbabec, 0x5dbf34b, 0x21ab82f, 0x20ef1a1],
            },
        ],
    ]
    [Failed to find PVC aws8l5x8: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/persistentvolumeclaims/aws8l5x8": dial tcp 35.177.99.229:443: connect: connection refused, Failed to delete PVC aws8l5x8: PVC Delete API error: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/persistentvolumeclaims/aws8l5x8": dial tcp 35.177.99.229:443: connect: connection refused, Failed to delete StorageClass volume-7211ch2lw: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-7211ch2lw": dial tcp 35.177.99.229:443: connect: connection refused]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:155 +0x154
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3.1(0xc002501760, 0xc001ecf000, 0xc000480020)
... skipping 8 lines ...
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-7211".
Jul 31 08:04:35.260: FAIL: failed to list events in namespace "volume-7211"
Unexpected error:
    <*url.Error | 0xc003962f90>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00210ec00, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "volume-7211" for this suite.
Jul 31 08:04:35.371: FAIL: Couldn't delete ns: "volume-7211": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211", Err:(*net.OpError)(0xc0039ac550)})

Full Stack Trace
panic(0x6a4afe0, 0xc0038927c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc003b44280, 0x275, 0x869f4fa, 0x67, 0x36f, 0xc003c66a00, 0x4d0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc003b44280, 0x275, 0xc0035445b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc003b44000, 0x260, 0xc003a0c660, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc003544748, 0x76cfa48, 0x9e10598, 0x0, 0xc0035448b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc003544748, 0x76cfa48, 0x9e10598, 0xc0035448b0, 0x2, 0x2, 0xc000510400)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc003962f90, 0xc0035448b0, 0x2, 0x2)
... skipping 26 lines ...
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Jul 31 08:04:35.144: while cleaning up resource
      Unexpected error:
          <errors.aggregate | len:1, cap:1>: [
              [
                  {
                      error: {
                          cause: {
                              Op: "Get",
                              URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/persistentvolumeclaims/aws8l5x8",
                              Err: {
                                  Op: "dial",
                                  Net: "tcp",
                                  Source: nil,
                                  Addr: {IP: "#\xb1c\xe5", Port: 443, Zone: ""},
                                  Err: {Syscall: "connect", Err: 0x6f},
                              },
                          },
                          msg: "Failed to find PVC aws8l5x8",
                      },
                      stack: [0x5c54c46, 0x5d13e85, 0x5d14065, 0x5d146e5, 0x248ae03, 0x248aa1c, 0x2489d47, 0x249104f, 0x24906f2, 0x2496991, 0x24964a7, 0x2495c97, 0x2498326, 0x249adf8, 0x249ab4d, 0x5dbabec, 0x5dbf34b, 0x21ab82f, 0x20ef1a1],
                  },
                  {
                      error: {
                          cause: {
                              s: "PVC Delete API error: Delete \"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/persistentvolumeclaims/aws8l5x8\": dial tcp 35.177.99.229:443: connect: connection refused",
                          },
                          msg: "Failed to delete PVC aws8l5x8",
                      },
                      stack: [0x5c54766, 0x5d13e85, 0x5d14065, 0x5d146e5, 0x248ae03, 0x248aa1c, 0x2489d47, 0x249104f, 0x24906f2, 0x2496991, 0x24964a7, 0x2495c97, 0x2498326, 0x249adf8, 0x249ab4d, 0x5dbabec, 0x5dbf34b, 0x21ab82f, 0x20ef1a1],
                  },
                  {
                      error: {
                          cause: {
                              Op: "Delete",
                              URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-7211ch2lw",
                              Err: {
                                  Op: "dial",
                                  Net: "tcp",
                                  Source: nil,
                                  Addr: {IP: "#\xb1c\xe5", Port: 443, Zone: ""},
                                  Err: {Syscall: "connect", Err: 0x6f},
                              },
                          },
                          msg: "Failed to delete StorageClass volume-7211ch2lw",
                      },
                      stack: [0x5c541ab, 0x5d13e85, 0x5d14065, 0x5d146e5, 0x248ae03, 0x248aa1c, 0x2489d47, 0x249104f, 0x24906f2, 0x2496991, 0x24964a7, 0x2495c97, 0x2498326, 0x249adf8, 0x249ab4d, 0x5dbabec, 0x5dbf34b, 0x21ab82f, 0x20ef1a1],
                  },
              ],
          ]
          [Failed to find PVC aws8l5x8: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/persistentvolumeclaims/aws8l5x8": dial tcp 35.177.99.229:443: connect: connection refused, Failed to delete PVC aws8l5x8: PVC Delete API error: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-7211/persistentvolumeclaims/aws8l5x8": dial tcp 35.177.99.229:443: connect: connection refused, Failed to delete StorageClass volume-7211ch2lw: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/apis/storage.k8s.io/v1/storageclasses/volume-7211ch2lw": dial tcp 35.177.99.229:443: connect: connection refused]
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:155
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","total":-1,"completed":1,"skipped":14,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data"]}

S
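
The <errors.aggregate> dump above comes from the cleanup path running every step (find PVC, delete PVC, delete StorageClass) even after earlier ones fail, then flattening the results into one error. A minimal sketch of that pattern with apimachinery's aggregate helper; the step functions are hypothetical stand-ins for the framework's teardown:

package main

import (
	"fmt"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

func main() {
	// Attempt every cleanup step and collect the failures, rather than
	// bailing at the first one.
	var errs []error
	for _, step := range []func() error{findPVC, deletePVC, deleteStorageClass} {
		if err := step(); err != nil {
			errs = append(errs, err)
		}
	}
	// NewAggregate returns nil for an empty slice; otherwise it prints
	// the bracketed "[err1, err2, ...]" form seen in the log above.
	if agg := utilerrors.NewAggregate(errs); agg != nil {
		fmt.Println(agg)
	}
}

// Hypothetical stand-ins for the PVC/StorageClass teardown steps.
func findPVC() error { return fmt.Errorf("Failed to find PVC aws8l5x8: connection refused") }
func deletePVC() error {
	return fmt.Errorf("Failed to delete PVC aws8l5x8: connection refused")
}
func deleteStorageClass() error {
	return fmt.Errorf("Failed to delete StorageClass volume-7211ch2lw: connection refused")
}
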
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:35.393: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 58 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:03:33.307: INFO: >>> kubeConfig: /root/.kube/config
... skipping 51 lines ...
Jul 31 08:04:15.448: INFO: Waiting for pod local-injector to disappear
Jul 31 08:04:15.551: INFO: Pod local-injector still exists
Jul 31 08:04:17.448: INFO: Waiting for pod local-injector to disappear
Jul 31 08:04:17.561: INFO: Pod local-injector still exists
Jul 31 08:04:19.449: INFO: Waiting for pod local-injector to disappear
STEP: starting local-client
Jul 31 08:04:35.147: FAIL: Failed to create client pod: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-5570/pods": dial tcp 35.177.99.229:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:505
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:188 +0x531
... skipping 6 lines ...
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: cleaning the environment after local
STEP: Deleting pv and pvc
Jul 31 08:04:35.147: INFO: Deleting PersistentVolumeClaim "pvc-t6fdr"
Jul 31 08:04:35.256: INFO: Deleting PersistentVolume "local-hdprf"
Jul 31 08:04:35.367: FAIL: Failed to delete PVC or PV: [failed to delete PVC "pvc-t6fdr": PVC Delete API error: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-5570/persistentvolumeclaims/pvc-t6fdr": dial tcp 35.177.99.229:443: connect: connection refused, failed to delete PV "local-hdprf": PV Delete API error: Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/persistentvolumes/local-hdprf": dial tcp 35.177.99.229:443: connect: connection refused]

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:149 +0x205
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3.1(0xc0026518c0, 0xc001c8d8e0, 0xc000733a20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:163 +0x105
panic(0x6a4afe0, 0xc0042ef440)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00349b1e0, 0xca, 0x86bad8e, 0x72, 0x1f9, 0xc000c14380, 0x331)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00349b1e0, 0xca, 0xc002082a80, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x6f27722, 0x1f, 0xc002082c18, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/framework/volume.testVolumeClient(0xc0026518c0, 0xc00426d6a0, 0xb, 0x6ea5881, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:521 +0x1b3
k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...)
... skipping 8 lines ...
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-5570".
Jul 31 08:04:35.477: FAIL: failed to list events in namespace "volume-5570"
Unexpected error:
    <*url.Error | 0xc0029cac00>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-5570/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0033c2f00, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "volume-5570" for this suite.
Jul 31 08:04:35.587: FAIL: Couldn't delete ns: "volume-5570": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-5570": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-5570", Err:(*net.OpError)(0xc0034579a0)})

Full Stack Trace
panic(0x6a4afe0, 0xc0042ef940)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00098fb80, 0x275, 0x869f4fa, 0x67, 0x36f, 0xc0046f3900, 0x4d0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00098fb80, 0x275, 0xc0020825b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc00098f680, 0x260, 0xc0044a3940, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc002082748, 0x76cfa48, 0x9e10598, 0x0, 0xc0020828b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc002082748, 0x76cfa48, 0x9e10598, 0xc0020828b0, 0x2, 0x2, 0xc000361000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc0029cac00, 0xc0020828b0, 0x2, 0x2)
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Jul 31 08:04:35.147: Failed to create client pod: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-5570/pods": dial tcp 35.177.99.229:443: connect: connection refused

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:505
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":3,"skipped":11,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:35.617: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
Jul 31 08:04:36.568: FAIL: Failed to list pods: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/gc-2063/pods": dial tcp 35.177.99.229:443: connect: connection refused

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0005b9b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0005b9b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0005b9b00, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "gc-2063".
Jul 31 08:04:36.676: FAIL: failed to list events in namespace "gc-2063"
Unexpected error:
    <*url.Error | 0xc002c25ef0>: {
        Op: "Get",
        URL: "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/gc-2063/events",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
... skipping 19 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0005b9b00, 0x70c0ed8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
STEP: Destroying namespace "gc-2063" for this suite.
Jul 31 08:04:36.785: FAIL: Couldn't delete ns: "gc-2063": Delete "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/gc-2063": dial tcp 35.177.99.229:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/gc-2063", Err:(*net.OpError)(0xc003a22fa0)})

Full Stack Trace
panic(0x6a4afe0, 0xc0024ab1c0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00174e000, 0x269, 0x869f4fa, 0x67, 0x36f, 0xc001c7e000, 0x4d0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x61a6040, 0x759a860)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00174e000, 0x269, 0xc0032b65b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Fail(0xc000ca7b80, 0x254, 0xc0003b8190, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc0032b6748, 0x76cfa48, 0x9e10598, 0x0, 0xc0032b68b0, 0x2, 0x2, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:75 +0x1f3
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc0032b6748, 0x76cfa48, 0x9e10598, 0xc0032b68b0, 0x2, 0x2, 0xc000980000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x766d300, 0xc002c25ef0, 0xc0032b68b0, 0x2, 0x2)
... skipping 21 lines ...
• Failure [56.730 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul 31 08:04:36.569: Failed to list pods: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/gc-2063/pods": dial tcp 35.177.99.229:443: connect: connection refused

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113
------------------------------
{"msg":"FAILED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":3,"skipped":32,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:36.813: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 41 lines ...
• [SLOW TEST:73.442 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:48.783: INFO: Only supported for providers [gce gke] (not aws)
... skipping 132 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on default medium should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 31 08:04:13.047: INFO: Waiting up to 5m0s for pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8" in namespace "emptydir-3047" to be "Succeeded or Failed"
Jul 31 08:04:13.149: INFO: Pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8": Phase="Pending", Reason="", readiness=false. Elapsed: 101.520627ms
Jul 31 08:04:15.252: INFO: Pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204849823s
Jul 31 08:04:17.359: INFO: Pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312064256s
Jul 31 08:04:19.468: INFO: Get pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8" in namespace "emptydir-3047" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/emptydir-3047/pods/pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.821: INFO: Get pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8" in namespace "emptydir-3047" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/emptydir-3047/pods/pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.930: INFO: Get pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8" in namespace "emptydir-3047" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/emptydir-3047/pods/pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:41.040: INFO: Get pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8" in namespace "emptydir-3047" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/emptydir-3047/pods/pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:48.164: INFO: Pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.117033725s
STEP: Saw pod success
Jul 31 08:04:48.165: INFO: Pod "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8" satisfied condition "Succeeded or Failed"
Jul 31 08:04:48.313: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8 container test-container: <nil>
STEP: delete the pod
Jul 31 08:04:48.674: INFO: Waiting for pod pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8 to disappear
Jul 31 08:04:48.802: INFO: Pod pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on default medium should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:71
------------------------------
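
The "failed, ignoring for 2s" lines in the EmptyDir block above show the framework's tolerant polling: transient API errors are swallowed and the pod phase is re-checked until the timeout, which is why the pod still reaches "Succeeded" once the apiserver recovers. A minimal sketch of that loop with apimachinery's wait helper, using the pod and namespace from the log and assuming the same kubeconfig path:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 5m, matching "Waiting up to 5m0s for pod ...
	// to be 'Succeeded or Failed'" above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("emptydir-3047").Get(context.TODO(), "pod-c700a03e-a68f-4b92-88cf-ad5ea5b221f8", metav1.GetOptions{})
		if err != nil {
			// Transient error (e.g. apiserver restarting): ignore and retry,
			// like the "failed, ignoring for 2s" lines above.
			return false, nil
		}
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	fmt.Println("wait result:", err)
}
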
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":5,"skipped":72,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:16.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-df9249a3-f577-4463-8230-5a7127390719
STEP: Creating a pod to test consume secrets
Jul 31 08:04:17.057: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69" in namespace "projected-1852" to be "Succeeded or Failed"
Jul 31 08:04:17.162: INFO: Pod "pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69": Phase="Pending", Reason="", readiness=false. Elapsed: 104.663362ms
Jul 31 08:04:19.272: INFO: Get pod "pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69" in namespace "projected-1852" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1852/pods/pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.820: INFO: Get pod "pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69" in namespace "projected-1852" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1852/pods/pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.929: INFO: Get pod "pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69" in namespace "projected-1852" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1852/pods/pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:41.040: INFO: Get pod "pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69" in namespace "projected-1852" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/projected-1852/pods/pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:48.118: INFO: Pod "pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.06135822s
STEP: Saw pod success
Jul 31 08:04:48.118: INFO: Pod "pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69" satisfied condition "Succeeded or Failed"
Jul 31 08:04:48.290: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 31 08:04:48.637: INFO: Waiting for pod pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69 to disappear
Jul 31 08:04:48.785: INFO: Pod pod-projected-secrets-b738c8ed-a256-4cd5-a7e9-bef584e3db69 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:32.877 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":3,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":72,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:04:49.172: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 190 lines ...
Jul 31 08:04:13.942: INFO: stdout: "nodeport-update-service-jp6hcnodeport-update-service-jp6hcnodeport-update-service-jp6hcnodeport-update-service-jp6hcnodeport-update-service-jp6hc"
Jul 31 08:04:13.942: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9864 exec execpod86bm2 -- /bin/sh -x -c echo hostName | nc -v -u -w 2 172.20.51.93 32552'
Jul 31 08:04:17.873: INFO: stderr: "+ echo hostName\n+ nc -v -u -w 2 172.20.51.93 32552\nConnection to 172.20.51.93 32552 port [udp/*] succeeded!\n"
Jul 31 08:04:17.873: INFO: stdout: "nodeport-update-service-jp6hcnodeport-update-service-jp6hcnodeport-update-service-jp6hcnodeport-update-service-jp6hcnodeport-update-service-jp6hc"
Jul 31 08:04:17.873: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9864 exec execpod86bm2 -- /bin/sh -x -c echo hostName | nc -v -u -w 2 172.20.61.108 32552'
Jul 31 08:04:18.080: INFO: rc: 1
Jul 31 08:04:18.080: INFO: Service reachability failing with error: error running /tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9864 exec execpod86bm2 -- /bin/sh -x -c echo hostName | nc -v -u -w 2 172.20.61.108 32552:
Command stdout:

stderr:
The connection to the server api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io was refused - did you specify the right host or port?

error:
exit status 1
Retrying...
... skipping 88 lines (the same "connection refused" retry cycle, repeated roughly once per second from 08:04:19 through 08:04:41) ...
Jul 31 08:04:42.080: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9864 exec execpod86bm2 -- /bin/sh -x -c echo hostName | nc -v -u -w 2 172.20.61.108 32552'
Jul 31 08:04:52.965: INFO: stderr: "+ echo hostName\n+ nc -v -u -w 2 172.20.61.108 32552\nConnection to 172.20.61.108 32552 port [udp/*] succeeded!\n"
Jul 31 08:04:52.965: INFO: stdout: "nodeport-update-service-ff4xznodeport-update-service-jp6hcnodeport-update-service-ff4xznodeport-update-service-ff4xznodeport-update-service-ff4xz"
Jul 31 08:04:52.965: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9864 exec execpod86bm2 -- /bin/sh -x -c echo hostName | nc -v -u -w 2 35.178.249.25 32552'
... skipping 21 lines ...
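Note: the reachability loop above shells out to nc through kubectl exec. As a rough Go sketch of the same UDP probe-with-retry (illustrative only, not the e2e framework's actual helper; the endpoint address and per-second retry cadence are taken from the log above):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeUDP mirrors `echo hostName | nc -v -u -w 2 <ip> <port>`: send a
// payload and wait briefly for any reply.
func probeUDP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("udp", addr, timeout)
	if err != nil {
		return err
	}
	defer conn.Close()
	if _, err := conn.Write([]byte("hostName")); err != nil {
		return err
	}
	if err := conn.SetReadDeadline(time.Now().Add(timeout)); err != nil {
		return err
	}
	buf := make([]byte, 1024)
	_, err = conn.Read(buf)
	return err
}

func main() {
	addr := "172.20.61.108:32552" // NodePort endpoint from the log above
	for i := 0; i < 30; i++ {     // retry roughly once per second, as the test does
		if err := probeUDP(addr, 2*time.Second); err == nil {
			fmt.Println("service reachable")
			return
		}
		time.Sleep(1 * time.Second)
	}
	fmt.Println("service unreachable, giving up")
}
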
------------------------------
[BeforeEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-limits-on-node
Jul 31 08:04:18.264: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:35.801: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.375: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.373: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.374: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:35
Jul 31 08:05:16.165: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
[It] should verify that all nodes have volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41
... skipping 6 lines ...
• [SLOW TEST:58.538 seconds]
[sig-storage] Volume limits
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should verify that all nodes have volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_limits.go:41
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":4,"skipped":43,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] health handlers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:28.876 seconds]
[sig-api-machinery] health handlers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should contain necessary checks
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/health_handlers.go:120
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:05:03.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:05:18.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5316" for this suite.


• [SLOW TEST:14.304 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:253
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":3,"skipped":11,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:18.315: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 37 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:05:17.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename request-timeout
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:05:18.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-7849" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":5,"skipped":28,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:18.625: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 61 lines ...
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Jul 31 08:04:18.438: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:35.800: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.546: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.548: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.552: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-5061deef-92fc-4029-83de-179894934fd5
STEP: Creating a pod to test consume secrets
Jul 31 08:05:16.048: INFO: Waiting up to 5m0s for pod "pod-secrets-38bebb26-6682-4614-bc9f-cea0dd4dd8ef" in namespace "secrets-8289" to be "Succeeded or Failed"
Jul 31 08:05:16.150: INFO: Pod "pod-secrets-38bebb26-6682-4614-bc9f-cea0dd4dd8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 102.332972ms
Jul 31 08:05:18.252: INFO: Pod "pod-secrets-38bebb26-6682-4614-bc9f-cea0dd4dd8ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.2047243s
STEP: Saw pod success
Jul 31 08:05:18.253: INFO: Pod "pod-secrets-38bebb26-6682-4614-bc9f-cea0dd4dd8ef" satisfied condition "Succeeded or Failed"
Jul 31 08:05:18.355: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-secrets-38bebb26-6682-4614-bc9f-cea0dd4dd8ef container secret-volume-test: <nil>
STEP: delete the pod
Jul 31 08:05:18.590: INFO: Waiting for pod pod-secrets-38bebb26-6682-4614-bc9f-cea0dd4dd8ef to disappear
Jul 31 08:05:18.692: INFO: Pod pod-secrets-38bebb26-6682-4614-bc9f-cea0dd4dd8ef no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:60.671 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":58,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: Destroying namespace "node-problem-detector-6088" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.724 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 24 lines ...
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:19.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Jul 31 08:04:19.358: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.821: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:37.468: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:39.468: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:41.467: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 31 08:05:16.867: INFO: >>> kubeConfig: /root/.kube/config
... skipping 10 lines ...
• [SLOW TEST:60.771 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:20.032: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 23 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 28 lines ...
Jul 31 08:05:16.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 31 08:05:17.368: INFO: Waiting up to 5m0s for pod "downward-api-0e4a6037-6441-4b11-8740-37b99de04ece" in namespace "downward-api-3058" to be "Succeeded or Failed"
Jul 31 08:05:17.471: INFO: Pod "downward-api-0e4a6037-6441-4b11-8740-37b99de04ece": Phase="Pending", Reason="", readiness=false. Elapsed: 102.856748ms
Jul 31 08:05:19.575: INFO: Pod "downward-api-0e4a6037-6441-4b11-8740-37b99de04ece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206549109s
STEP: Saw pod success
Jul 31 08:05:19.575: INFO: Pod "downward-api-0e4a6037-6441-4b11-8740-37b99de04ece" satisfied condition "Succeeded or Failed"
Jul 31 08:05:19.678: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod downward-api-0e4a6037-6441-4b11-8740-37b99de04ece container dapi-container: <nil>
STEP: delete the pod
Jul 31 08:05:19.893: INFO: Waiting for pod downward-api-0e4a6037-6441-4b11-8740-37b99de04ece to disappear
Jul 31 08:05:19.995: INFO: Pod downward-api-0e4a6037-6441-4b11-8740-37b99de04ece no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:05:19.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3058" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":49,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:20.216: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:03:52.262: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Jul 31 08:04:02.644: INFO: PersistentVolumeClaim pvc-97rd9 found but phase is Pending instead of Bound.
Jul 31 08:04:04.749: INFO: PersistentVolumeClaim pvc-97rd9 found and phase=Bound (10.62152115s)
Jul 31 08:04:04.749: INFO: Waiting up to 3m0s for PersistentVolume aws-jtqrg to have phase Bound
Jul 31 08:04:04.874: INFO: PersistentVolume aws-jtqrg found and phase=Bound (124.418025ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-24s5
STEP: Creating a pod to test exec-volume-test
Jul 31 08:04:05.386: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-24s5" in namespace "volume-4398" to be "Succeeded or Failed"
Jul 31 08:04:05.497: INFO: Pod "exec-volume-test-preprovisionedpv-24s5": Phase="Pending", Reason="", readiness=false. Elapsed: 111.769728ms
Jul 31 08:04:07.610: INFO: Pod "exec-volume-test-preprovisionedpv-24s5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224813342s
Jul 31 08:04:09.717: INFO: Pod "exec-volume-test-preprovisionedpv-24s5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331792839s
Jul 31 08:04:11.819: INFO: Pod "exec-volume-test-preprovisionedpv-24s5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433774583s
Jul 31 08:04:13.923: INFO: Pod "exec-volume-test-preprovisionedpv-24s5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536982847s
Jul 31 08:04:16.028: INFO: Pod "exec-volume-test-preprovisionedpv-24s5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642184296s
Jul 31 08:04:18.142: INFO: Get pod "exec-volume-test-preprovisionedpv-24s5" in namespace "volume-4398" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-4398/pods/exec-volume-test-preprovisionedpv-24s5": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:35.540: INFO: Get pod "exec-volume-test-preprovisionedpv-24s5" in namespace "volume-4398" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-4398/pods/exec-volume-test-preprovisionedpv-24s5": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:37.648: INFO: Get pod "exec-volume-test-preprovisionedpv-24s5" in namespace "volume-4398" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-4398/pods/exec-volume-test-preprovisionedpv-24s5": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:39.758: INFO: Get pod "exec-volume-test-preprovisionedpv-24s5" in namespace "volume-4398" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-4398/pods/exec-volume-test-preprovisionedpv-24s5": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:47.764: INFO: Pod "exec-volume-test-preprovisionedpv-24s5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.378327899s
STEP: Saw pod success
Jul 31 08:04:47.764: INFO: Pod "exec-volume-test-preprovisionedpv-24s5" satisfied condition "Succeeded or Failed"
Jul 31 08:04:48.104: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-24s5 container exec-container-preprovisionedpv-24s5: <nil>
STEP: delete the pod
Jul 31 08:04:48.573: INFO: Waiting for pod exec-volume-test-preprovisionedpv-24s5 to disappear
Jul 31 08:04:48.678: INFO: Pod exec-volume-test-preprovisionedpv-24s5 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-24s5
Jul 31 08:04:48.679: INFO: Deleting pod "exec-volume-test-preprovisionedpv-24s5" in namespace "volume-4398"
STEP: Deleting pv and pvc
Jul 31 08:04:48.801: INFO: Deleting PersistentVolumeClaim "pvc-97rd9"
Jul 31 08:04:48.919: INFO: Deleting PersistentVolume "aws-jtqrg"
Jul 31 08:04:49.347: INFO: Couldn't delete PD "aws://eu-west-2a/vol-01fdbb338e1e9e5f7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01fdbb338e1e9e5f7 is currently attached to i-0eccb4b5dfe1d0b8e
	status code: 400, request id: 57672f3a-8335-416c-8cd8-c42387c7d926
Jul 31 08:04:54.884: INFO: Couldn't delete PD "aws://eu-west-2a/vol-01fdbb338e1e9e5f7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01fdbb338e1e9e5f7 is currently attached to i-0eccb4b5dfe1d0b8e
	status code: 400, request id: 1026a917-76d6-49b4-ab4e-f8050d21f720
Jul 31 08:05:00.415: INFO: Couldn't delete PD "aws://eu-west-2a/vol-01fdbb338e1e9e5f7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01fdbb338e1e9e5f7 is currently attached to i-0eccb4b5dfe1d0b8e
	status code: 400, request id: fd027cc5-a9fb-4178-b978-1d829a31cf59
Jul 31 08:05:05.956: INFO: Couldn't delete PD "aws://eu-west-2a/vol-01fdbb338e1e9e5f7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01fdbb338e1e9e5f7 is currently attached to i-0eccb4b5dfe1d0b8e
	status code: 400, request id: faabd337-38d0-4209-abda-ad12be56e922
Jul 31 08:05:11.550: INFO: Couldn't delete PD "aws://eu-west-2a/vol-01fdbb338e1e9e5f7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01fdbb338e1e9e5f7 is currently attached to i-0eccb4b5dfe1d0b8e
	status code: 400, request id: acfdce4c-c6f6-4f41-8cc4-24fb200d528f
Jul 31 08:05:17.132: INFO: Couldn't delete PD "aws://eu-west-2a/vol-01fdbb338e1e9e5f7", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-01fdbb338e1e9e5f7 is currently attached to i-0eccb4b5dfe1d0b8e
	status code: 400, request id: 13dc6c36-d5d2-4faf-8a0a-27cec213ec4c
Jul 31 08:05:22.676: INFO: Successfully deleted PD "aws://eu-west-2a/vol-01fdbb338e1e9e5f7".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:05:22.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4398" for this suite.
... skipping 11 lines ...
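Note: the VolumeInUse retries above come from deleting an EBS volume that is still attached to an instance; the framework sleeps 5s and tries again until the detach completes. A sketch of the same delete-with-backoff using aws-sdk-go (volume ID copied from the log; credentials and region are assumed to come from the environment, and the error-code handling is the standard awserr pattern, not the framework's exact code):

package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))
	volID := "vol-01fdbb338e1e9e5f7" // volume ID from the log above

	// Keep retrying while the volume is still attached (VolumeInUse),
	// sleeping 5s between attempts, as the test framework does.
	for {
		_, err := svc.DeleteVolume(&ec2.DeleteVolumeInput{
			VolumeId: aws.String(volID),
		})
		if err == nil {
			fmt.Println("volume deleted")
			return
		}
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "VolumeInUse" {
			fmt.Printf("still attached, sleeping 5s: %v\n", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Printf("unrecoverable error: %v\n", err)
		return
	}
}
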
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:35.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
Jul 31 08:04:35.562: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:37.670: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:39.672: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:41.669: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 31 08:05:16.510: INFO: Creating pod...
Jul 31 08:05:16.721: INFO: Pod Quantity: 1 Status: Pending
Jul 31 08:05:17.826: INFO: Pod Quantity: 1 Status: Pending
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":2,"skipped":23,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:23.077: INFO: Only supported for providers [vsphere] (not aws)
... skipping 149 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:05:23.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4482" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":84,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
Jul 31 08:04:18.215: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:35.801: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.325: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.325: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.327: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
... skipping 17 lines ...
• [SLOW TEST:75.473 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":2,"skipped":10,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:33.597: INFO: Only supported for providers [gce gke] (not aws)
... skipping 42 lines ...
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
Jul 31 08:04:18.398: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:35.800: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.506: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.504: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.507: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul 31 08:05:15.885: INFO: Waiting up to 5m0s for pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4" in namespace "security-context-4152" to be "Succeeded or Failed"
Jul 31 08:05:15.989: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 103.910445ms
Jul 31 08:05:18.092: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207396s
Jul 31 08:05:20.195: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309803926s
Jul 31 08:05:22.304: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418918297s
Jul 31 08:05:24.406: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.521715097s
Jul 31 08:05:26.509: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.624011059s
Jul 31 08:05:28.612: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.726995064s
Jul 31 08:05:30.714: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.82904332s
Jul 31 08:05:32.815: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.930623552s
Jul 31 08:05:34.918: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.032821132s
STEP: Saw pod success
Jul 31 08:05:34.918: INFO: Pod "security-context-419f62c6-563f-41d5-8907-e3a0d20733f4" satisfied condition "Succeeded or Failed"
Jul 31 08:05:35.019: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod security-context-419f62c6-563f-41d5-8907-e3a0d20733f4 container test-container: <nil>
STEP: delete the pod
Jul 31 08:05:35.947: INFO: Waiting for pod security-context-419f62c6-563f-41d5-8907-e3a0d20733f4 to disappear
Jul 31 08:05:36.049: INFO: Pod security-context-419f62c6-563f-41d5-8907-e3a0d20733f4 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:78.063 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":51,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:36.369: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 16 lines ...
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:36.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
Jul 31 08:04:36.958: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:39.069: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:41.067: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 31 08:05:16.823: INFO: Pod name rollover-pod: Found 1 pods out of 1
... skipping 43 lines ...
• [SLOW TEST:60.763 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":4,"skipped":39,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:37.638: INFO: Only supported for providers [gce gke] (not aws)
... skipping 122 lines ...
• [SLOW TEST:16.166 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":7,"skipped":89,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:39.755: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 81 lines ...
Jul 31 08:04:10.403: INFO: Waiting for PV local-54j88 to bind to PVC pvc-6nclv
Jul 31 08:04:10.403: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-6nclv] to have phase Bound
Jul 31 08:04:10.505: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:12.609: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:14.714: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:16.828: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:18.936: INFO: Failed to get claim "pvc-6nclv", retrying in 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-2413/persistentvolumeclaims/pvc-6nclv": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.309: INFO: Failed to get claim "pvc-6nclv", retrying in 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-2413/persistentvolumeclaims/pvc-6nclv": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.417: INFO: Failed to get claim "pvc-6nclv", retrying in 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-2413/persistentvolumeclaims/pvc-6nclv": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.525: INFO: Failed to get claim "pvc-6nclv", retrying in 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-2413/persistentvolumeclaims/pvc-6nclv": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:48.117: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:50.232: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:52.334: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:54.437: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:56.539: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
Jul 31 08:04:58.640: INFO: PersistentVolumeClaim pvc-6nclv found but phase is Pending instead of Bound.
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":43,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:40.372: INFO: Only supported for providers [gce gke] (not aws)
... skipping 17 lines ...
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
Jul 31 08:04:18.387: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:35.800: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.496: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.496: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.500: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Jul 31 08:05:16.775: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Jul 31 08:05:18.878: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
... skipping 94 lines ...
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Jul 31 08:04:18.695: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.053: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.804: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.805: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.804: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-e80ea486-f4b7-4546-96d4-1ad65ba9acb1
STEP: Creating a pod to test consume secrets
Jul 31 08:05:16.074: INFO: Waiting up to 5m0s for pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64" in namespace "secrets-2661" to be "Succeeded or Failed"
Jul 31 08:05:16.175: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 101.263288ms
Jul 31 08:05:18.278: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204088132s
Jul 31 08:05:20.382: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308328701s
Jul 31 08:05:22.486: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411864303s
Jul 31 08:05:24.587: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513433401s
Jul 31 08:05:26.689: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 10.615780837s
... skipping 5 lines ...
Jul 31 08:05:39.305: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 23.231383474s
Jul 31 08:05:41.410: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 25.336336094s
Jul 31 08:05:43.513: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 27.438968568s
Jul 31 08:05:45.615: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Pending", Reason="", readiness=false. Elapsed: 29.541407389s
Jul 31 08:05:47.723: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.649737926s
STEP: Saw pod success
Jul 31 08:05:47.723: INFO: Pod "pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64" satisfied condition "Succeeded or Failed"
Jul 31 08:05:47.825: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64 container secret-volume-test: <nil>
STEP: delete the pod
Jul 31 08:05:48.044: INFO: Waiting for pod pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64 to disappear
Jul 31 08:05:48.146: INFO: Pod pod-secrets-da872cff-45ed-40dc-b2c3-7b1b6630ac64 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:89.870 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}

S
------------------------------
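The Secrets test above builds a pod whose secret volume sets defaultMode and whose pod-level securityContext sets fsGroup and a non-root UID. A minimal client-go sketch of that shape (names, mode, UID/GID values, and the image are illustrative assumptions, not the test's actual fixture):

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretPod builds a pod that consumes a secret volume as non-root,
    // with defaultMode on the volume and fsGroup on the pod.
    func secretPod() *corev1.Pod {
        defaultMode := int32(0440) // assumed file mode for the projected keys
        fsGroup := int64(1001)     // assumed supplemental group owning the volume
        runAsUser := int64(1000)   // assumed non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser: &runAsUser,
                    FSGroup:   &fsGroup,
                },
                Containers: []corev1.Container{{
                    Name:  "secret-volume-test",
                    Image: "registry.k8s.io/e2e-test-images/agnhost:2.32", // assumed image
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        MountPath: "/etc/secret-volume",
                        ReadOnly:  true,
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{
                            SecretName:  "secret-test-example",
                            DefaultMode: &defaultMode,
                        },
                    },
                }},
            },
        }
    }

------------------------------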
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":8,"skipped":52,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:05:45.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apf
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:05:48.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apf-9874" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":9,"skipped":52,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:05:48.768: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
Jul 31 08:04:12.171: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-43634r9xc
STEP: creating a claim
Jul 31 08:04:12.273: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-wnfw
STEP: Creating a pod to test exec-volume-test
Jul 31 08:04:12.583: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-wnfw" in namespace "volume-4363" to be "Succeeded or Failed"
Jul 31 08:04:12.689: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 106.6098ms
Jul 31 08:04:14.791: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208171392s
Jul 31 08:04:16.909: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325759372s
Jul 31 08:04:19.016: INFO: Get pod "exec-volume-test-dynamicpv-wnfw" in namespace "volume-4363" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-4363/pods/exec-volume-test-dynamicpv-wnfw": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.568: INFO: Get pod "exec-volume-test-dynamicpv-wnfw" in namespace "volume-4363" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-4363/pods/exec-volume-test-dynamicpv-wnfw": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.679: INFO: Get pod "exec-volume-test-dynamicpv-wnfw" in namespace "volume-4363" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-4363/pods/exec-volume-test-dynamicpv-wnfw": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.789: INFO: Get pod "exec-volume-test-dynamicpv-wnfw" in namespace "volume-4363" failed, ignoring for 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volume-4363/pods/exec-volume-test-dynamicpv-wnfw": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:48.118: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 35.535607744s
Jul 31 08:04:50.233: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 37.650627563s
Jul 31 08:04:52.335: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 39.752401211s
Jul 31 08:04:54.437: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 41.854536072s
Jul 31 08:04:56.540: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 43.957114059s
Jul 31 08:04:58.642: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 46.058926679s
... skipping 11 lines ...
Jul 31 08:05:23.943: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.359743938s
Jul 31 08:05:26.045: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.461833337s
Jul 31 08:05:28.148: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.564792333s
Jul 31 08:05:30.250: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.667491658s
Jul 31 08:05:32.353: INFO: Pod "exec-volume-test-dynamicpv-wnfw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m19.770422844s
STEP: Saw pod success
Jul 31 08:05:32.353: INFO: Pod "exec-volume-test-dynamicpv-wnfw" satisfied condition "Succeeded or Failed"
Jul 31 08:05:32.454: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod exec-volume-test-dynamicpv-wnfw container exec-container-dynamicpv-wnfw: <nil>
STEP: delete the pod
Jul 31 08:05:32.670: INFO: Waiting for pod exec-volume-test-dynamicpv-wnfw to disappear
Jul 31 08:05:32.771: INFO: Pod exec-volume-test-dynamicpv-wnfw no longer exists
STEP: Deleting pod exec-volume-test-dynamicpv-wnfw
Jul 31 08:05:32.772: INFO: Deleting pod "exec-volume-test-dynamicpv-wnfw" in namespace "volume-4363"
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":37,"failed":0}

S
------------------------------
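The "Get pod ... failed, ignoring for 2s ... connection refused" lines in the volume test above show the pod-phase poll tolerating a temporary API-server outage and resuming. A sketch of that polling pattern with client-go (the helper name, interval, and timeout are assumptions; the real framework helper may differ):

    package example

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodSuccess polls a pod until it reaches Succeeded, treating
    // transient API errors (such as connection refused) as retryable.
    func waitForPodSuccess(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // ignore and retry, as the poll in the log does
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s entered Failed", ns, name)
            }
            return false, nil
        })
    }

------------------------------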
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:05:49.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4060" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":6,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
• [SLOW TEST:34.723 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:920
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":6,"skipped":50,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 71 lines ...
Jul 31 08:05:43.758: INFO: PersistentVolumeClaim pvc-jr2zp found but phase is Pending instead of Bound.
Jul 31 08:05:45.862: INFO: PersistentVolumeClaim pvc-jr2zp found and phase=Bound (2.205330033s)
Jul 31 08:05:45.862: INFO: Waiting up to 3m0s for PersistentVolume local-qzttc to have phase Bound
Jul 31 08:05:45.964: INFO: PersistentVolume local-qzttc found and phase=Bound (102.063842ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jmzh
STEP: Creating a pod to test subpath
Jul 31 08:05:46.272: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jmzh" in namespace "provisioning-1222" to be "Succeeded or Failed"
Jul 31 08:05:46.379: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh": Phase="Pending", Reason="", readiness=false. Elapsed: 107.495777ms
Jul 31 08:05:48.482: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210491942s
Jul 31 08:05:50.590: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317800435s
Jul 31 08:05:52.693: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.421114497s
STEP: Saw pod success
Jul 31 08:05:52.693: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh" satisfied condition "Succeeded or Failed"
Jul 31 08:05:52.795: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-jmzh container test-container-subpath-preprovisionedpv-jmzh: <nil>
STEP: delete the pod
Jul 31 08:05:53.014: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jmzh to disappear
Jul 31 08:05:53.116: INFO: Pod pod-subpath-test-preprovisionedpv-jmzh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jmzh
Jul 31 08:05:53.116: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jmzh" in namespace "provisioning-1222"
STEP: Creating pod pod-subpath-test-preprovisionedpv-jmzh
STEP: Creating a pod to test subpath
Jul 31 08:05:53.321: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jmzh" in namespace "provisioning-1222" to be "Succeeded or Failed"
Jul 31 08:05:53.423: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh": Phase="Pending", Reason="", readiness=false. Elapsed: 101.986475ms
Jul 31 08:05:55.527: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205810067s
Jul 31 08:05:57.629: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308264063s
STEP: Saw pod success
Jul 31 08:05:57.629: INFO: Pod "pod-subpath-test-preprovisionedpv-jmzh" satisfied condition "Succeeded or Failed"
Jul 31 08:05:57.760: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-jmzh container test-container-subpath-preprovisionedpv-jmzh: <nil>
STEP: delete the pod
Jul 31 08:05:57.973: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jmzh to disappear
Jul 31 08:05:58.075: INFO: Pod pod-subpath-test-preprovisionedpv-jmzh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jmzh
Jul 31 08:05:58.075: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jmzh" in namespace "provisioning-1222"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":8,"skipped":94,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

S
------------------------------
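The pre-provisioned PV tests above repeatedly log "PersistentVolumeClaim ... found but phase is Pending instead of Bound" until binding completes. A minimal sketch of that wait loop (helper name, interval, and timeout are assumptions):

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPVCBound polls a PersistentVolumeClaim until phase=Bound.
    func waitForPVCBound(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
            pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors are retried
            }
            return pvc.Status.Phase == corev1.ClaimBound, nil
        })
    }

------------------------------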
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 18 lines ...
Jul 31 08:05:44.467: INFO: PersistentVolumeClaim pvc-72h7w found but phase is Pending instead of Bound.
Jul 31 08:05:46.570: INFO: PersistentVolumeClaim pvc-72h7w found and phase=Bound (2.20368821s)
Jul 31 08:05:46.570: INFO: Waiting up to 3m0s for PersistentVolume local-69knj to have phase Bound
Jul 31 08:05:46.673: INFO: PersistentVolume local-69knj found and phase=Bound (103.390717ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-8b2m
STEP: Creating a pod to test exec-volume-test
Jul 31 08:05:46.980: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-8b2m" in namespace "volume-5162" to be "Succeeded or Failed"
Jul 31 08:05:47.082: INFO: Pod "exec-volume-test-preprovisionedpv-8b2m": Phase="Pending", Reason="", readiness=false. Elapsed: 101.166847ms
Jul 31 08:05:49.184: INFO: Pod "exec-volume-test-preprovisionedpv-8b2m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204010285s
Jul 31 08:05:51.286: INFO: Pod "exec-volume-test-preprovisionedpv-8b2m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305821998s
Jul 31 08:05:53.389: INFO: Pod "exec-volume-test-preprovisionedpv-8b2m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.408346767s
Jul 31 08:05:55.492: INFO: Pod "exec-volume-test-preprovisionedpv-8b2m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511906674s
Jul 31 08:05:57.595: INFO: Pod "exec-volume-test-preprovisionedpv-8b2m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.61401637s
STEP: Saw pod success
Jul 31 08:05:57.595: INFO: Pod "exec-volume-test-preprovisionedpv-8b2m" satisfied condition "Succeeded or Failed"
Jul 31 08:05:57.696: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-8b2m container exec-container-preprovisionedpv-8b2m: <nil>
STEP: delete the pod
Jul 31 08:05:57.960: INFO: Waiting for pod exec-volume-test-preprovisionedpv-8b2m to disappear
Jul 31 08:05:58.061: INFO: Pod exec-volume-test-preprovisionedpv-8b2m no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-8b2m
Jul 31 08:05:58.062: INFO: Deleting pod "exec-volume-test-preprovisionedpv-8b2m" in namespace "volume-5162"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:01.245: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 27 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-nqr9
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:05:34.481: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-nqr9" in namespace "subpath-6437" to be "Succeeded or Failed"
Jul 31 08:05:34.583: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Pending", Reason="", readiness=false. Elapsed: 101.942655ms
Jul 31 08:05:36.687: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205546911s
Jul 31 08:05:38.789: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307625735s
Jul 31 08:05:40.891: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.409817038s
Jul 31 08:05:42.993: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511941894s
Jul 31 08:05:45.099: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.617215216s
... skipping 7 lines ...
Jul 31 08:06:01.969: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Running", Reason="", readiness=true. Elapsed: 27.487993454s
Jul 31 08:06:04.073: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Running", Reason="", readiness=true. Elapsed: 29.591240443s
Jul 31 08:06:06.176: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Running", Reason="", readiness=true. Elapsed: 31.694980672s
Jul 31 08:06:08.280: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Running", Reason="", readiness=true. Elapsed: 33.798528666s
Jul 31 08:06:10.383: INFO: Pod "pod-subpath-test-secret-nqr9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.901043546s
STEP: Saw pod success
Jul 31 08:06:10.383: INFO: Pod "pod-subpath-test-secret-nqr9" satisfied condition "Succeeded or Failed"
Jul 31 08:06:10.485: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-subpath-test-secret-nqr9 container test-container-subpath-secret-nqr9: <nil>
STEP: delete the pod
Jul 31 08:06:10.700: INFO: Waiting for pod pod-subpath-test-secret-nqr9 to disappear
Jul 31 08:06:10.802: INFO: Pod pod-subpath-test-secret-nqr9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-nqr9
Jul 31 08:06:10.802: INFO: Deleting pod "pod-subpath-test-secret-nqr9" in namespace "subpath-6437"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]"]}

SSSSSS
------------------------------
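The atomic-writer subpath test above mounts a single file of a secret volume via subPath rather than exposing the whole volume directory. A minimal container/volume sketch (secret name, key, image, and paths are illustrative assumptions):

    package example

    import corev1 "k8s.io/api/core/v1"

    // subPathMount mounts one projected key of a secret volume via subPath.
    func subPathMount() (corev1.Container, corev1.Volume) {
        c := corev1.Container{
            Name:  "test-container-subpath",
            Image: "registry.k8s.io/e2e-test-images/agnhost:2.32", // assumed image
            VolumeMounts: []corev1.VolumeMount{{
                Name:      "test-volume",
                MountPath: "/test-volume",
                SubPath:   "test-file", // assumed key projected by the secret
            }},
        }
        v := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // assumed name
            },
        }
        return c, v
    }

------------------------------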
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:11.153: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 38 lines ...
STEP: Creating a PVC followed by a PV
Jul 31 08:04:11.370: INFO: Waiting for PV local-qjr8q to bind to PVC pvc-dbzqd
Jul 31 08:04:11.370: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-dbzqd] to have phase Bound
Jul 31 08:04:11.474: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
Jul 31 08:04:13.577: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
Jul 31 08:04:15.681: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
Jul 31 08:04:18.978: INFO: Failed to get claim "pvc-dbzqd", retrying in 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-2419/persistentvolumeclaims/pvc-dbzqd": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.565: INFO: Failed to get claim "pvc-dbzqd", retrying in 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-2419/persistentvolumeclaims/pvc-dbzqd": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.674: INFO: Failed to get claim "pvc-dbzqd", retrying in 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-2419/persistentvolumeclaims/pvc-dbzqd": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.784: INFO: Failed to get claim "pvc-dbzqd", retrying in 2s. Error: Get "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces/volumemode-2419/persistentvolumeclaims/pvc-dbzqd": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:48.118: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
Jul 31 08:04:50.233: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
Jul 31 08:04:52.335: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
Jul 31 08:04:54.438: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
Jul 31 08:04:56.541: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
Jul 31 08:04:58.643: INFO: PersistentVolumeClaim pvc-dbzqd found but phase is Pending instead of Bound.
... skipping 33 lines ...
Jul 31 08:06:05.342: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c losetup -d ] Namespace:volumemode-2419 PodName:hostexec-ip-172-20-58-77.eu-west-2.compute.internal-94v86 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul 31 08:06:05.342: INFO: >>> kubeConfig: /root/.kube/config
Jul 31 08:06:06.190: INFO: exec ip-172-20-58-77.eu-west-2.compute.internal: command:   losetup -d 
Jul 31 08:06:06.190: INFO: exec ip-172-20-58-77.eu-west-2.compute.internal: stdout:    ""
Jul 31 08:06:06.190: INFO: exec ip-172-20-58-77.eu-west-2.compute.internal: stderr:    "losetup: option requires an argument -- 'd'\nTry 'losetup --help' for more information.\n"
Jul 31 08:06:06.190: INFO: exec ip-172-20-58-77.eu-west-2.compute.internal: exit code: 0
Jul 31 08:06:06.190: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 331 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351

      Jul 31 08:06:06.190: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:161
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":3,"skipped":13,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}

SSS
------------------------------
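The FAIL above comes from teardown invoking losetup -d with an empty device argument (note the "option requires an argument -- 'd'" stderr), after which the follow-up command exits 1. A hedged sketch of the kind of guard that avoids this class of bug (the helper and its behavior are assumptions, not the framework's actual code):

    package example

    import (
        "fmt"
        "os/exec"
    )

    // detachLoopDevice detaches a loop device, refusing to run a bare
    // "losetup -d" when no device path was recorded.
    func detachLoopDevice(dev string) error {
        if dev == "" {
            return fmt.Errorf("no loop device recorded; refusing to run bare 'losetup -d'")
        }
        if out, err := exec.Command("losetup", "-d", dev).CombinedOutput(); err != nil {
            return fmt.Errorf("losetup -d %s: %v (output: %q)", dev, err, out)
        }
        return nil
    }

------------------------------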
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:12.313: INFO: Driver local doesn't support ext3 -- skipping
... skipping 62 lines ...
• [SLOW TEST:12.434 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":5,"skipped":22,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:13.748: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 59 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":4,"skipped":56,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:05:38.941: INFO: >>> kubeConfig: /root/.kube/config
... skipping 14 lines ...
Jul 31 08:05:45.034: INFO: PersistentVolumeClaim pvc-skvzf found but phase is Pending instead of Bound.
Jul 31 08:05:47.137: INFO: PersistentVolumeClaim pvc-skvzf found and phase=Bound (4.306810686s)
Jul 31 08:05:47.137: INFO: Waiting up to 3m0s for PersistentVolume local-szjc9 to have phase Bound
Jul 31 08:05:47.238: INFO: PersistentVolume local-szjc9 found and phase=Bound (101.436784ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kprg
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:05:47.544: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kprg" in namespace "provisioning-7293" to be "Succeeded or Failed"
Jul 31 08:05:47.646: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Pending", Reason="", readiness=false. Elapsed: 101.348153ms
Jul 31 08:05:49.754: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209681066s
Jul 31 08:05:51.857: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3124217s
Jul 31 08:05:53.960: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415187593s
Jul 31 08:05:56.062: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Running", Reason="", readiness=true. Elapsed: 8.517845129s
Jul 31 08:05:58.165: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Running", Reason="", readiness=true. Elapsed: 10.62048355s
... skipping 2 lines ...
Jul 31 08:06:04.474: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Running", Reason="", readiness=true. Elapsed: 16.929152427s
Jul 31 08:06:06.579: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Running", Reason="", readiness=true. Elapsed: 19.034569331s
Jul 31 08:06:08.686: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Running", Reason="", readiness=true. Elapsed: 21.141402251s
Jul 31 08:06:10.791: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Running", Reason="", readiness=true. Elapsed: 23.246107163s
Jul 31 08:06:12.892: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.347619726s
STEP: Saw pod success
Jul 31 08:06:12.892: INFO: Pod "pod-subpath-test-preprovisionedpv-kprg" satisfied condition "Succeeded or Failed"
Jul 31 08:06:12.994: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-kprg container test-container-subpath-preprovisionedpv-kprg: <nil>
STEP: delete the pod
Jul 31 08:06:13.211: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kprg to disappear
Jul 31 08:06:13.312: INFO: Pod pod-subpath-test-preprovisionedpv-kprg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kprg
Jul 31 08:06:13.312: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kprg" in namespace "provisioning-7293"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":56,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:15.527: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 74 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:06:16.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9148" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":6,"skipped":59,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:16.923: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 31 08:06:13.071: INFO: The status of Pod server-envvars-69a3bb2d-d806-4e63-8de4-1454f60e8aa6 is Pending, waiting for it to be Running (with Ready = true)
Jul 31 08:06:15.173: INFO: The status of Pod server-envvars-69a3bb2d-d806-4e63-8de4-1454f60e8aa6 is Running (Ready = true)
Jul 31 08:06:15.485: INFO: Waiting up to 5m0s for pod "client-envvars-0612fc5b-499e-4339-b69e-60659cfac32c" in namespace "pods-76" to be "Succeeded or Failed"
Jul 31 08:06:15.587: INFO: Pod "client-envvars-0612fc5b-499e-4339-b69e-60659cfac32c": Phase="Pending", Reason="", readiness=false. Elapsed: 101.249502ms
Jul 31 08:06:17.689: INFO: Pod "client-envvars-0612fc5b-499e-4339-b69e-60659cfac32c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.203431132s
STEP: Saw pod success
Jul 31 08:06:17.689: INFO: Pod "client-envvars-0612fc5b-499e-4339-b69e-60659cfac32c" satisfied condition "Succeeded or Failed"
Jul 31 08:06:17.804: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod client-envvars-0612fc5b-499e-4339-b69e-60659cfac32c container env3cont: <nil>
STEP: delete the pod
Jul 31 08:06:18.021: INFO: Waiting for pod client-envvars-0612fc5b-499e-4339-b69e-60659cfac32c to disappear
Jul 31 08:06:18.126: INFO: Pod client-envvars-0612fc5b-499e-4339-b69e-60659cfac32c no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.976 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":23,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:18.370: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 134 lines ...
Jul 31 08:05:40.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315532, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-84fd54d799\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 31 08:05:42.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315532, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-84fd54d799\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 31 08:05:44.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315532, loc:(*time.Location)(0x9ddf5a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63763315529, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-84fd54d799\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 31 08:05:46.443: INFO: Waiting up to 2m0s to get response from 100.66.214.72:8080
Jul 31 08:05:46.443: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3210 exec pause-pod-84fd54d799-8kr9v -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip'
Jul 31 08:06:17.641: INFO: rc: 28
Jul 31 08:06:17.641: INFO: got err: error running /tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3210 exec pause-pod-84fd54d799-8kr9v -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Jul 31 08:06:19.642: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3210 exec pause-pod-84fd54d799-8kr9v -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip'
Jul 31 08:06:20.888: INFO: rc: 7
Jul 31 08:06:20.888: INFO: got err: error running /tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3210 exec pause-pod-84fd54d799-8kr9v -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Jul 31 08:06:22.889: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3210 exec pause-pod-84fd54d799-8kr9v -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip'
Jul 31 08:06:24.084: INFO: rc: 7
Jul 31 08:06:24.084: INFO: got err: error running /tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3210 exec pause-pod-84fd54d799-8kr9v -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip
command terminated with exit code 7

error:
exit status 7, retry until timeout
Jul 31 08:06:26.085: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3210 exec pause-pod-84fd54d799-8kr9v -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip'
Jul 31 08:06:27.243: INFO: stderr: "+ curl -q -s --connect-timeout 30 100.66.214.72:8080/clientip\n"
Jul 31 08:06:27.243: INFO: stdout: "100.96.1.47:36102"
STEP: Verifying the preserved source ip
Jul 31 08:06:27.243: INFO: Waiting up to 2m0s to get response from 100.66.214.72:8080
... skipping 20 lines ...
------------------------------
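The rc: 28 and rc: 7 lines above are curl's timeout and connection-refused exit codes; the services test simply reruns the same kubectl exec until the endpoint answers with the client IP. A sketch of that retry shape using os/exec (the command mirrors the log; the helper itself and its deadline are illustrative):

    package example

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // curlViaPod reruns "kubectl exec ... curl <url>" until it succeeds
    // or the deadline passes.
    func curlViaPod(kubeconfig, ns, pod, url string) (string, error) {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl",
                "--kubeconfig", kubeconfig, "-n", ns, "exec", pod, "--",
                "curl", "-q", "-s", "--connect-timeout", "30", url,
            ).Output()
            if err == nil {
                return string(out), nil // e.g. the "100.96.1.47:36102" reply above
            }
            time.Sleep(2 * time.Second) // curl exit codes 7/28 mean retry
        }
        return "", fmt.Errorf("no response from %s before deadline", url)
    }

------------------------------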
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
Jul 31 08:04:18.841: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.308: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.951: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.952: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.951: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] should expand volume without restarting pod if nodeExpansion=off
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
STEP: Building a driver namespace object, basename csi-mock-volumes-530
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
... skipping 94 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":4,"skipped":44,"failed":1,"failures":["[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:29.143: INFO: Only supported for providers [azure] (not aws)
... skipping 211 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:06:28.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 40 lines ...
STEP: Destroying namespace "services-9996" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:31.075: INFO: Only supported for providers [gce gke] (not aws)
... skipping 42 lines ...
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:35.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
Jul 31 08:04:35.767: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:37.877: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:39.878: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] should expand volume by restarting pod if attach=off, nodeExpansion=on
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
STEP: Building a driver namespace object, basename csi-mock-volumes-8258
STEP: Waiting for a default service account to be provisioned in namespace
STEP: deploying csi mock driver
... skipping 123 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:06:32.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1904" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":6,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:32.341: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 42 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-70550d00-7ee8-47c9-b1c4-a75c474e68f5
STEP: Creating a pod to test consume configMaps
Jul 31 08:06:11.896: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e" in namespace "configmap-3677" to be "Succeeded or Failed"
Jul 31 08:06:11.998: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 101.805374ms
Jul 31 08:06:14.100: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204382602s
Jul 31 08:06:16.205: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309139736s
Jul 31 08:06:18.309: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41296872s
Jul 31 08:06:20.412: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51588017s
Jul 31 08:06:22.514: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.617800547s
Jul 31 08:06:24.616: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.72010959s
Jul 31 08:06:26.719: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.823061488s
Jul 31 08:06:28.823: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.927118003s
Jul 31 08:06:30.927: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Pending", Reason="", readiness=false. Elapsed: 19.031050223s
Jul 31 08:06:33.030: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.13416032s
STEP: Saw pod success
Jul 31 08:06:33.030: INFO: Pod "pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e" satisfied condition "Succeeded or Failed"
Jul 31 08:06:33.132: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:06:33.345: INFO: Waiting for pod pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e to disappear
Jul 31 08:06:33.447: INFO: Pod pod-configmaps-e0b80960-b748-40aa-9117-d03d9269823e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:22.479 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
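The "consumable from pods in volume with mappings" test above projects a ConfigMap key to a custom file path via items, instead of the default one-file-per-key layout. A minimal volume sketch (ConfigMap name, key, and path are illustrative assumptions):

    package example

    import corev1 "k8s.io/api/core/v1"

    // configMapVolumeWithMappings remaps one ConfigMap key to a chosen
    // path inside the mounted volume.
    func configMapVolumeWithMappings() corev1.Volume {
        return corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{
                        Name: "configmap-test-volume-map", // assumed name
                    },
                    Items: []corev1.KeyToPath{{
                        Key:  "data-1",         // assumed key in the ConfigMap
                        Path: "path/to/data-2", // assumed file path inside the mount
                    }},
                },
            },
        }
    }

------------------------------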
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":5,"skipped":65,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:05:55.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":65,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:37.075: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 58 lines ...
• [SLOW TEST:9.251 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: no PDB => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":5,"skipped":83,"failed":1,"failures":["[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]}

SSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:06:18.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Jul 31 08:06:19.071: INFO: Waiting up to 5m0s for pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60" in namespace "containers-8599" to be "Succeeded or Failed"
Jul 31 08:06:19.172: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 100.907504ms
Jul 31 08:06:21.274: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202871572s
Jul 31 08:06:23.379: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307723689s
Jul 31 08:06:25.481: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410339859s
Jul 31 08:06:27.583: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512050197s
Jul 31 08:06:29.685: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 10.614184224s
Jul 31 08:06:31.788: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 12.716809994s
Jul 31 08:06:33.890: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 14.819457411s
Jul 31 08:06:35.992: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Pending", Reason="", readiness=false. Elapsed: 16.921592771s
Jul 31 08:06:38.106: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.035201102s
STEP: Saw pod success
Jul 31 08:06:38.106: INFO: Pod "client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60" satisfied condition "Succeeded or Failed"
Jul 31 08:06:38.207: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60 container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:06:38.426: INFO: Waiting for pod client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60 to disappear
Jul 31 08:06:38.528: INFO: Pod client-containers-3cdf22b7-99d1-46ef-9ecf-239c868aba60 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:20.296 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":36,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:38.760: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":6,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:05:23.000: INFO: >>> kubeConfig: /root/.kube/config
... skipping 12 lines ...
Jul 31 08:05:29.331: INFO: PersistentVolumeClaim pvc-7srtf found but phase is Pending instead of Bound.
Jul 31 08:05:31.434: INFO: PersistentVolumeClaim pvc-7srtf found and phase=Bound (2.204625608s)
Jul 31 08:05:31.434: INFO: Waiting up to 3m0s for PersistentVolume local-dmk7k to have phase Bound
Jul 31 08:05:31.536: INFO: PersistentVolume local-dmk7k found and phase=Bound (101.75238ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-v9vj
STEP: Creating a pod to test subpath
Jul 31 08:05:31.843: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-v9vj" in namespace "provisioning-157" to be "Succeeded or Failed"
Jul 31 08:05:31.945: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 102.169973ms
Jul 31 08:05:34.049: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205855453s
Jul 31 08:05:36.152: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309220802s
Jul 31 08:05:38.255: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412179023s
Jul 31 08:05:40.357: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514550997s
Jul 31 08:05:42.461: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618376667s
... skipping 19 lines ...
Jul 31 08:06:24.542: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 52.699214416s
Jul 31 08:06:26.646: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 54.802862575s
Jul 31 08:06:28.754: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 56.910824338s
Jul 31 08:06:30.859: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Pending", Reason="", readiness=false. Elapsed: 59.016214987s
Jul 31 08:06:32.961: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m1.118622209s
STEP: Saw pod success
Jul 31 08:06:32.961: INFO: Pod "pod-subpath-test-preprovisionedpv-v9vj" satisfied condition "Succeeded or Failed"
Jul 31 08:06:33.064: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-v9vj container test-container-volume-preprovisionedpv-v9vj: <nil>
STEP: delete the pod
Jul 31 08:06:33.286: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-v9vj to disappear
Jul 31 08:06:33.388: INFO: Pod pod-subpath-test-preprovisionedpv-v9vj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-v9vj
Jul 31 08:06:33.388: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-v9vj" in namespace "provisioning-157"
... skipping 6 lines ...
Jul 31 08:06:33.797: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -r /tmp/local-driver-f59c5c68-5855-4e19-92c0-7829ada34028] Namespace:provisioning-157 PodName:hostexec-ip-172-20-61-108.eu-west-2.compute.internal-mh8js ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul 31 08:06:33.797: INFO: >>> kubeConfig: /root/.kube/config
Jul 31 08:06:34.528: INFO: exec ip-172-20-61-108.eu-west-2.compute.internal: command:   rm -r /tmp/local-driver-f59c5c68-5855-4e19-92c0-7829ada34028
Jul 31 08:06:34.528: INFO: exec ip-172-20-61-108.eu-west-2.compute.internal: stdout:    ""
Jul 31 08:06:34.528: INFO: exec ip-172-20-61-108.eu-west-2.compute.internal: stderr:    "rm: cannot remove '/tmp/local-driver-f59c5c68-5855-4e19-92c0-7829ada34028': No such file or directory\n"
Jul 31 08:06:34.528: INFO: exec ip-172-20-61-108.eu-west-2.compute.internal: exit code: 0
Jul 31 08:06:34.529: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "command terminated with exit code 1",
        },
        Code: 1,
    }
... skipping 314 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205

      Jul 31 08:06:34.529: Unexpected error:
          <exec.CodeExitError>: {
              Err: {
                  s: "command terminated with exit code 1",
              },
              Code: 1,
          }
          command terminated with exit code 1
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:210
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":49,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Jul 31 08:06:30.456: INFO: PersistentVolumeClaim pvc-brvlw found but phase is Pending instead of Bound.
Jul 31 08:06:32.558: INFO: PersistentVolumeClaim pvc-brvlw found and phase=Bound (2.203162569s)
Jul 31 08:06:32.558: INFO: Waiting up to 3m0s for PersistentVolume local-zwm24 to have phase Bound
Jul 31 08:06:32.659: INFO: PersistentVolume local-zwm24 found and phase=Bound (101.016354ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6rxd
STEP: Creating a pod to test subpath
Jul 31 08:06:32.964: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6rxd" in namespace "provisioning-5902" to be "Succeeded or Failed"
Jul 31 08:06:33.065: INFO: Pod "pod-subpath-test-preprovisionedpv-6rxd": Phase="Pending", Reason="", readiness=false. Elapsed: 101.200502ms
Jul 31 08:06:35.168: INFO: Pod "pod-subpath-test-preprovisionedpv-6rxd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204001111s
Jul 31 08:06:37.271: INFO: Pod "pod-subpath-test-preprovisionedpv-6rxd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.306667188s
STEP: Saw pod success
Jul 31 08:06:37.271: INFO: Pod "pod-subpath-test-preprovisionedpv-6rxd" satisfied condition "Succeeded or Failed"
Jul 31 08:06:37.373: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-6rxd container test-container-volume-preprovisionedpv-6rxd: <nil>
STEP: delete the pod
Jul 31 08:06:37.584: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6rxd to disappear
Jul 31 08:06:37.685: INFO: Pod pod-subpath-test-preprovisionedpv-6rxd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6rxd
Jul 31 08:06:37.685: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6rxd" in namespace "provisioning-5902"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":8,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:39.530: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 110 lines ...
Jul 31 08:05:33.888: INFO: PersistentVolumeClaim pvc-zbtcf found and phase=Bound (101.737567ms)
STEP: Deleting the previously created pod
Jul 31 08:06:05.404: INFO: Deleting pod "pvc-volume-tester-tqvj8" in namespace "csi-mock-volumes-6391"
Jul 31 08:06:05.508: INFO: Wait up to 5m0s for pod "pvc-volume-tester-tqvj8" to be fully deleted
STEP: Checking CSI driver logs
Jul 31 08:06:13.826: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6IklscGYyZXBLUDBfSHVETzJOekNsNWlGRkFZLTN2RU5HYWxDSGphYnFqaUUifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2Mjc3MTkzNTQsImlhdCI6MTYyNzcxODc1NCwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLWNlMTQ0ZTYxMmItODNjMGMudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtNjM5MSIsInBvZCI6eyJuYW1lIjoicHZjLXZvbHVtZS10ZXN0ZXItdHF2ajgiLCJ1aWQiOiI3N2M2NTM1MS0yNDE0LTRiZjktYTlhMS1kN2JkODFkMTEzMTYifSwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlZmF1bHQiLCJ1aWQiOiI0NTM5ZmVhNC03ZWNiLTQ5MzUtOTZjZC00Y2M2N2JiYzcxNDEifX0sIm5iZiI6MTYyNzcxODc1NCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNzaS1tb2NrLXZvbHVtZXMtNjM5MTpkZWZhdWx0In0.gC6dVs-0u-zwAmkEefTXmVjKX61kBmVLk8VEgt5d5pMCuaMgLCtMQunHXIOsYArilIdDf1cKArCcZjokCVrBi5NSipA55FIJl5GtD3sGYef76fRXAU_FdxnxqWLIGuTnxRTiL9x8Bdbxq3380WqI5B_ryW_jsCJcWWrfdL8ZnumSudjl-YH9c5haL7KrA71UsdtNaTbzKW_8J2Y6d9KGiLbMC9iL7CFBujDfH1dvmOi_b0NiKDqCctE0w8lcJkhKSUEqFQVz9GYGtWfVvDALsFX9DXF6mdQrgVpFeMQ_i9oGqhJQqs_RM6g3wpiZJ45-FYOh9bXbDHl4tO_F6A94bQ","expirationTimestamp":"2021-07-31T08:15:54Z"}}
Jul 31 08:06:13.826: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/77c65351-2414-4bf9-a9a1-d7bd81d11316/volumes/kubernetes.io~csi/pvc-9ad99c1e-7636-4d69-bcc3-83a46e86ee1d/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-tqvj8
Jul 31 08:06:13.826: INFO: Deleting pod "pvc-volume-tester-tqvj8" in namespace "csi-mock-volumes-6391"
STEP: Deleting claim pvc-zbtcf
Jul 31 08:06:14.133: INFO: Waiting up to 2m0s for PersistentVolume pvc-9ad99c1e-7636-4d69-bcc3-83a46e86ee1d to get deleted
Jul 31 08:06:14.236: INFO: PersistentVolume pvc-9ad99c1e-7636-4d69-bcc3-83a46e86ee1d found and phase=Released (102.588078ms)
Jul 31 08:06:16.339: INFO: PersistentVolume pvc-9ad99c1e-7636-4d69-bcc3-83a46e86ee1d found and phase=Released (2.205257644s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should be plumbed down when csiServiceAccountTokenEnabled=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":6,"skipped":41,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:40.408: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:06:40.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9983" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":70,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:40.607: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":7,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:06:40.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Jul 31 08:06:40.778: INFO: Waiting up to 5m0s for pod "var-expansion-2d1fc240-bfab-433d-8fb9-e401081515cb" in namespace "var-expansion-6245" to be "Succeeded or Failed"
Jul 31 08:06:40.880: INFO: Pod "var-expansion-2d1fc240-bfab-433d-8fb9-e401081515cb": Phase="Pending", Reason="", readiness=false. Elapsed: 101.331747ms
Jul 31 08:06:42.982: INFO: Pod "var-expansion-2d1fc240-bfab-433d-8fb9-e401081515cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.203431028s
STEP: Saw pod success
Jul 31 08:06:42.982: INFO: Pod "var-expansion-2d1fc240-bfab-433d-8fb9-e401081515cb" satisfied condition "Succeeded or Failed"
Jul 31 08:06:43.083: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod var-expansion-2d1fc240-bfab-433d-8fb9-e401081515cb container dapi-container: <nil>
STEP: delete the pod
Jul 31 08:06:43.293: INFO: Waiting for pod var-expansion-2d1fc240-bfab-433d-8fb9-e401081515cb to disappear
Jul 31 08:06:43.394: INFO: Pod var-expansion-2d1fc240-bfab-433d-8fb9-e401081515cb no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:06:43.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6245" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":3,"skipped":7,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename fsgroupchangepolicy
Jul 31 08:04:18.758: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.308: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.868: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.868: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.867: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[It] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
Jul 31 08:05:15.867: INFO: Creating resource for dynamic PV
Jul 31 08:05:15.867: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass fsgroupchangepolicy-1290mmkt8
... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":5,"skipped":35,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:06:40.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:146
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Jul 31 08:06:41.340: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 31 08:06:47.548: INFO: deleting claim "volume-provisioning-5960"/"pvc-6vx6l"
... skipping 6 lines ...

• [SLOW TEST:7.340 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:824
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:825
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":8,"skipped":78,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:47.973: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:06:49.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-8777" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":7,"skipped":68,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}

SS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":4,"skipped":21,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:06:31.273: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Jul 31 08:06:43.819: INFO: PersistentVolumeClaim pvc-v2xrv found but phase is Pending instead of Bound.
Jul 31 08:06:45.921: INFO: PersistentVolumeClaim pvc-v2xrv found and phase=Bound (10.611381268s)
Jul 31 08:06:45.921: INFO: Waiting up to 3m0s for PersistentVolume local-tn6wp to have phase Bound
Jul 31 08:06:46.022: INFO: PersistentVolume local-tn6wp found and phase=Bound (101.163729ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rs9l
STEP: Creating a pod to test subpath
Jul 31 08:06:46.332: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rs9l" in namespace "provisioning-1713" to be "Succeeded or Failed"
Jul 31 08:06:46.434: INFO: Pod "pod-subpath-test-preprovisionedpv-rs9l": Phase="Pending", Reason="", readiness=false. Elapsed: 101.464391ms
Jul 31 08:06:48.537: INFO: Pod "pod-subpath-test-preprovisionedpv-rs9l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204351458s
Jul 31 08:06:50.640: INFO: Pod "pod-subpath-test-preprovisionedpv-rs9l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.307131184s
STEP: Saw pod success
Jul 31 08:06:50.640: INFO: Pod "pod-subpath-test-preprovisionedpv-rs9l" satisfied condition "Succeeded or Failed"
Jul 31 08:06:50.743: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-rs9l container test-container-volume-preprovisionedpv-rs9l: <nil>
STEP: delete the pod
Jul 31 08:06:50.954: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rs9l to disappear
Jul 31 08:06:51.056: INFO: Pod pod-subpath-test-preprovisionedpv-rs9l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rs9l
Jul 31 08:06:51.056: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rs9l" in namespace "provisioning-1713"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":21,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:52.685: INFO: Only supported for providers [gce gke] (not aws)
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:06:55.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-1209" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":6,"skipped":30,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:55.656: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Jul 31 08:06:44.088: INFO: PersistentVolumeClaim pvc-tk5w8 found but phase is Pending instead of Bound.
Jul 31 08:06:46.191: INFO: PersistentVolumeClaim pvc-tk5w8 found and phase=Bound (12.720847677s)
Jul 31 08:06:46.191: INFO: Waiting up to 3m0s for PersistentVolume local-sb5mw to have phase Bound
Jul 31 08:06:46.294: INFO: PersistentVolume local-sb5mw found and phase=Bound (102.204337ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-xsm4
STEP: Creating a pod to test exec-volume-test
Jul 31 08:06:46.601: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-xsm4" in namespace "volume-4396" to be "Succeeded or Failed"
Jul 31 08:06:46.703: INFO: Pod "exec-volume-test-preprovisionedpv-xsm4": Phase="Pending", Reason="", readiness=false. Elapsed: 101.931516ms
Jul 31 08:06:48.809: INFO: Pod "exec-volume-test-preprovisionedpv-xsm4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208086716s
Jul 31 08:06:50.913: INFO: Pod "exec-volume-test-preprovisionedpv-xsm4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311855415s
Jul 31 08:06:53.016: INFO: Pod "exec-volume-test-preprovisionedpv-xsm4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415310392s
Jul 31 08:06:55.121: INFO: Pod "exec-volume-test-preprovisionedpv-xsm4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.520235453s
STEP: Saw pod success
Jul 31 08:06:55.121: INFO: Pod "exec-volume-test-preprovisionedpv-xsm4" satisfied condition "Succeeded or Failed"
Jul 31 08:06:55.224: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-xsm4 container exec-container-preprovisionedpv-xsm4: <nil>
STEP: delete the pod
Jul 31 08:06:55.433: INFO: Waiting for pod exec-volume-test-preprovisionedpv-xsm4 to disappear
Jul 31 08:06:55.552: INFO: Pod exec-volume-test-preprovisionedpv-xsm4 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-xsm4
Jul 31 08:06:55.552: INFO: Deleting pod "exec-volume-test-preprovisionedpv-xsm4" in namespace "volume-4396"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":9,"skipped":95,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":44,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:57.139: INFO: Only supported for providers [vsphere] (not aws)
... skipping 43 lines ...
Jul 31 08:06:45.002: INFO: PersistentVolumeClaim pvc-ztddc found but phase is Pending instead of Bound.
Jul 31 08:06:47.105: INFO: PersistentVolumeClaim pvc-ztddc found and phase=Bound (2.204136593s)
Jul 31 08:06:47.105: INFO: Waiting up to 3m0s for PersistentVolume local-wnkj9 to have phase Bound
Jul 31 08:06:47.215: INFO: PersistentVolume local-wnkj9 found and phase=Bound (109.912645ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-zqhx
STEP: Creating a pod to test subpath
Jul 31 08:06:47.526: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-zqhx" in namespace "provisioning-2495" to be "Succeeded or Failed"
Jul 31 08:06:47.628: INFO: Pod "pod-subpath-test-preprovisionedpv-zqhx": Phase="Pending", Reason="", readiness=false. Elapsed: 102.103001ms
Jul 31 08:06:49.733: INFO: Pod "pod-subpath-test-preprovisionedpv-zqhx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206935566s
Jul 31 08:06:51.836: INFO: Pod "pod-subpath-test-preprovisionedpv-zqhx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310006705s
Jul 31 08:06:53.939: INFO: Pod "pod-subpath-test-preprovisionedpv-zqhx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.413474579s
STEP: Saw pod success
Jul 31 08:06:53.939: INFO: Pod "pod-subpath-test-preprovisionedpv-zqhx" satisfied condition "Succeeded or Failed"
Jul 31 08:06:54.043: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-zqhx container test-container-subpath-preprovisionedpv-zqhx: <nil>
STEP: delete the pod
Jul 31 08:06:54.255: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-zqhx to disappear
Jul 31 08:06:54.357: INFO: Pod pod-subpath-test-preprovisionedpv-zqhx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-zqhx
Jul 31 08:06:54.357: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-zqhx" in namespace "provisioning-2495"
... skipping 50 lines ...
STEP: Destroying namespace "services-7709" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":8,"skipped":50,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:58.526: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
• [SLOW TEST:12.043 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":6,"skipped":38,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:06:59.185: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 24 lines ...
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:04:18.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
Jul 31 08:04:18.442: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:35.802: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:36.550: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:38.549: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
Jul 31 08:04:40.555: INFO: Unexpected error while creating namespace: Post "https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io/api/v1/namespaces": dial tcp 35.177.99.229:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0731 08:05:16.315118    4795 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should replace jobs when ReplaceConcurrent [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
... skipping 74 lines ...
Jul 31 08:06:02.806: INFO: PersistentVolumeClaim csi-hostpath5r9gh found but phase is Pending instead of Bound.
Jul 31 08:06:04.908: INFO: PersistentVolumeClaim csi-hostpath5r9gh found but phase is Pending instead of Bound.
Jul 31 08:06:07.011: INFO: PersistentVolumeClaim csi-hostpath5r9gh found but phase is Pending instead of Bound.
Jul 31 08:06:09.114: INFO: PersistentVolumeClaim csi-hostpath5r9gh found and phase=Bound (8.514557958s)
STEP: Expanding non-expandable pvc
Jul 31 08:06:09.319: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul 31 08:06:09.524: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:11.731: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:13.731: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:15.738: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:17.730: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:19.729: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:21.730: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:23.731: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:25.734: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:27.730: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:29.730: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:31.734: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:33.730: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:35.729: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:37.737: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:39.730: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 31 08:06:39.937: INFO: Error updating pvc csi-hostpath5r9gh: persistentvolumeclaims "csi-hostpath5r9gh" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul 31 08:06:39.937: INFO: Deleting PersistentVolumeClaim "csi-hostpath5r9gh"
Jul 31 08:06:40.041: INFO: Waiting up to 5m0s for PersistentVolume pvc-c1442937-cc86-4e04-9444-b563cc3d3ebc to get deleted
Jul 31 08:06:40.144: INFO: PersistentVolume pvc-c1442937-cc86-4e04-9444-b563cc3d3ebc was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-1672
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":7,"skipped":53,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}

SSSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":1,"skipped":20,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:01.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 32 lines ...
• [SLOW TEST:6.587 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":10,"skipped":98,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:03.551: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":52,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:06:58.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-687ccec7-83b8-400a-be58-491f6c612f96
STEP: Creating a pod to test consume secrets
Jul 31 08:06:58.802: INFO: Waiting up to 5m0s for pod "pod-secrets-af3ec67d-fc11-4c26-82dd-0e30fb3dabf3" in namespace "secrets-7770" to be "Succeeded or Failed"
Jul 31 08:06:58.905: INFO: Pod "pod-secrets-af3ec67d-fc11-4c26-82dd-0e30fb3dabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 102.038061ms
Jul 31 08:07:01.007: INFO: Pod "pod-secrets-af3ec67d-fc11-4c26-82dd-0e30fb3dabf3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20487207s
Jul 31 08:07:03.112: INFO: Pod "pod-secrets-af3ec67d-fc11-4c26-82dd-0e30fb3dabf3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309064656s
STEP: Saw pod success
Jul 31 08:07:03.112: INFO: Pod "pod-secrets-af3ec67d-fc11-4c26-82dd-0e30fb3dabf3" satisfied condition "Succeeded or Failed"
Jul 31 08:07:03.214: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-secrets-af3ec67d-fc11-4c26-82dd-0e30fb3dabf3 container secret-volume-test: <nil>
STEP: delete the pod
Jul 31 08:07:03.439: INFO: Waiting for pod pod-secrets-af3ec67d-fc11-4c26-82dd-0e30fb3dabf3 to disappear
Jul 31 08:07:03.542: INFO: Pod pod-secrets-af3ec67d-fc11-4c26-82dd-0e30fb3dabf3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.674 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":52,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:03.785: INFO: Only supported for providers [gce gke] (not aws)
... skipping 55 lines ...
Jul 31 08:06:32.878: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8767gs788
STEP: creating a claim
Jul 31 08:06:32.980: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-mtzg
STEP: Creating a pod to test subpath
Jul 31 08:06:33.292: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-mtzg" in namespace "provisioning-8767" to be "Succeeded or Failed"
Jul 31 08:06:33.393: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 100.982868ms
Jul 31 08:06:35.495: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203107457s
Jul 31 08:06:37.597: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304598806s
Jul 31 08:06:39.701: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.408436622s
Jul 31 08:06:41.802: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510248365s
Jul 31 08:06:43.906: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.613649799s
Jul 31 08:06:46.008: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.715800372s
Jul 31 08:06:48.110: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.817895716s
Jul 31 08:06:50.215: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.922971111s
Jul 31 08:06:52.319: INFO: Pod "pod-subpath-test-dynamicpv-mtzg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.026820593s
STEP: Saw pod success
Jul 31 08:06:52.319: INFO: Pod "pod-subpath-test-dynamicpv-mtzg" satisfied condition "Succeeded or Failed"
Jul 31 08:06:52.422: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-mtzg container test-container-subpath-dynamicpv-mtzg: <nil>
STEP: delete the pod
Jul 31 08:06:52.641: INFO: Waiting for pod pod-subpath-test-dynamicpv-mtzg to disappear
Jul 31 08:06:52.744: INFO: Pod pod-subpath-test-dynamicpv-mtzg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-mtzg
Jul 31 08:06:52.744: INFO: Deleting pod "pod-subpath-test-dynamicpv-mtzg" in namespace "provisioning-8767"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":7,"skipped":31,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":2,"skipped":20,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:02.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:04.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2150" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":3,"skipped":20,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:04.520: INFO: >>> kubeConfig: /root/.kube/config
... skipping 119 lines ...
• [SLOW TEST:87.517 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:283
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":7,"skipped":48,"failed":0}

S
------------------------------
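The CronJob test above ("should delete successful finished jobs with limit of one successful job") exercises history pruning: with SuccessfulJobsHistoryLimit set to 1, the controller keeps only the most recent successful Job and garbage-collects the rest. A batch/v1 sketch of the relevant field; the name and schedule are illustrative, and a real object would also need a pod template inside JobTemplate:

package e2esketch

import (
	batchv1 "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// Keeps at most one finished successful Job around, which is the
// behavior the SLOW TEST above verifies.
var cronJobWithHistoryLimit = batchv1.CronJob{
	ObjectMeta: metav1.ObjectMeta{Name: "history-limit-example"}, // illustrative name
	Spec: batchv1.CronJobSpec{
		Schedule:                   "*/1 * * * *",
		SuccessfulJobsHistoryLimit: int32Ptr(1),
		JobTemplate:                batchv1.JobTemplateSpec{}, // pod template elided for brevity
	},
}

------------------------------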
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:07.921: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
Jul 31 08:07:04.337: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 31 08:07:04.440: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-7srp
STEP: Creating a pod to test subpath
Jul 31 08:07:04.545: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-7srp" in namespace "provisioning-2990" to be "Succeeded or Failed"
Jul 31 08:07:04.648: INFO: Pod "pod-subpath-test-inlinevolume-7srp": Phase="Pending", Reason="", readiness=false. Elapsed: 102.978245ms
Jul 31 08:07:06.751: INFO: Pod "pod-subpath-test-inlinevolume-7srp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206282429s
Jul 31 08:07:08.855: INFO: Pod "pod-subpath-test-inlinevolume-7srp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.310300133s
STEP: Saw pod success
Jul 31 08:07:08.855: INFO: Pod "pod-subpath-test-inlinevolume-7srp" satisfied condition "Succeeded or Failed"
Jul 31 08:07:08.958: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-7srp container test-container-subpath-inlinevolume-7srp: <nil>
STEP: delete the pod
Jul 31 08:07:09.179: INFO: Waiting for pod pod-subpath-test-inlinevolume-7srp to disappear
Jul 31 08:07:09.281: INFO: Pod pod-subpath-test-inlinevolume-7srp no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-7srp
Jul 31 08:07:09.281: INFO: Deleting pod "pod-subpath-test-inlinevolume-7srp" in namespace "provisioning-2990"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":62,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:05.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 31 08:07:05.974: INFO: Waiting up to 5m0s for pod "pod-1a6c2778-aab1-4283-acff-791b473a9610" in namespace "emptydir-2600" to be "Succeeded or Failed"
Jul 31 08:07:06.078: INFO: Pod "pod-1a6c2778-aab1-4283-acff-791b473a9610": Phase="Pending", Reason="", readiness=false. Elapsed: 103.99279ms
Jul 31 08:07:08.182: INFO: Pod "pod-1a6c2778-aab1-4283-acff-791b473a9610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207461922s
Jul 31 08:07:10.285: INFO: Pod "pod-1a6c2778-aab1-4283-acff-791b473a9610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311191955s
STEP: Saw pod success
Jul 31 08:07:10.285: INFO: Pod "pod-1a6c2778-aab1-4283-acff-791b473a9610" satisfied condition "Succeeded or Failed"
Jul 31 08:07:10.401: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-1a6c2778-aab1-4283-acff-791b473a9610 container test-container: <nil>
STEP: delete the pod
Jul 31 08:07:10.620: INFO: Waiting for pod pod-1a6c2778-aab1-4283-acff-791b473a9610 to disappear
Jul 31 08:07:10.721: INFO: Pod pod-1a6c2778-aab1-4283-acff-791b473a9610 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.598 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":39,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:10.976: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 45 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-56bb86cb-221d-4f55-9336-33f3449d5821
STEP: Creating a pod to test consume configMaps
Jul 31 08:07:08.672: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4cbb9ce-20c9-4724-8f14-a7ea543d5589" in namespace "projected-7303" to be "Succeeded or Failed"
Jul 31 08:07:08.773: INFO: Pod "pod-projected-configmaps-c4cbb9ce-20c9-4724-8f14-a7ea543d5589": Phase="Pending", Reason="", readiness=false. Elapsed: 101.076476ms
Jul 31 08:07:10.883: INFO: Pod "pod-projected-configmaps-c4cbb9ce-20c9-4724-8f14-a7ea543d5589": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21073658s
STEP: Saw pod success
Jul 31 08:07:10.883: INFO: Pod "pod-projected-configmaps-c4cbb9ce-20c9-4724-8f14-a7ea543d5589" satisfied condition "Succeeded or Failed"
Jul 31 08:07:10.983: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-projected-configmaps-c4cbb9ce-20c9-4724-8f14-a7ea543d5589 container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:07:11.195: INFO: Waiting for pod pod-projected-configmaps-c4cbb9ce-20c9-4724-8f14-a7ea543d5589 to disappear
Jul 31 08:07:11.296: INFO: Pod pod-projected-configmaps-c4cbb9ce-20c9-4724-8f14-a7ea543d5589 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:11.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7303" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":54,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:11.551: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
STEP: Creating a pod to test emptydir subpath on tmpfs
Jul 31 08:07:02.822: INFO: Waiting up to 5m0s for pod "pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f" in namespace "emptydir-411" to be "Succeeded or Failed"
Jul 31 08:07:02.924: INFO: Pod "pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 102.006924ms
Jul 31 08:07:05.027: INFO: Pod "pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204927213s
Jul 31 08:07:07.130: INFO: Pod "pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308714123s
Jul 31 08:07:09.234: INFO: Pod "pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412534733s
Jul 31 08:07:11.338: INFO: Pod "pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.516169755s
STEP: Saw pod success
Jul 31 08:07:11.338: INFO: Pod "pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f" satisfied condition "Succeeded or Failed"
Jul 31 08:07:11.441: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f container test-container: <nil>
STEP: delete the pod
Jul 31 08:07:11.663: INFO: Waiting for pod pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f to disappear
Jul 31 08:07:11.766: INFO: Pod pod-ca50f2dd-a93c-4748-ba3d-2e8047398f0f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:63
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":8,"skipped":61,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:11.985: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:12.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2278" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":9,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:12.908: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 38 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 156 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291
    should scale a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":9,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:13.838: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 60 lines ...
Jul 31 08:06:59.860: INFO: PersistentVolumeClaim pvc-cp94x found but phase is Pending instead of Bound.
Jul 31 08:07:01.967: INFO: PersistentVolumeClaim pvc-cp94x found and phase=Bound (6.413690832s)
Jul 31 08:07:01.967: INFO: Waiting up to 3m0s for PersistentVolume local-qpdt7 to have phase Bound
Jul 31 08:07:02.069: INFO: PersistentVolume local-qpdt7 found and phase=Bound (101.731364ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9k4t
STEP: Creating a pod to test subpath
Jul 31 08:07:02.375: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9k4t" in namespace "provisioning-2564" to be "Succeeded or Failed"
Jul 31 08:07:02.477: INFO: Pod "pod-subpath-test-preprovisionedpv-9k4t": Phase="Pending", Reason="", readiness=false. Elapsed: 101.608502ms
Jul 31 08:07:04.579: INFO: Pod "pod-subpath-test-preprovisionedpv-9k4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20384261s
Jul 31 08:07:06.681: INFO: Pod "pod-subpath-test-preprovisionedpv-9k4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30624127s
Jul 31 08:07:08.786: INFO: Pod "pod-subpath-test-preprovisionedpv-9k4t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411298314s
Jul 31 08:07:10.890: INFO: Pod "pod-subpath-test-preprovisionedpv-9k4t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514927366s
Jul 31 08:07:12.993: INFO: Pod "pod-subpath-test-preprovisionedpv-9k4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.61810326s
STEP: Saw pod success
Jul 31 08:07:12.993: INFO: Pod "pod-subpath-test-preprovisionedpv-9k4t" satisfied condition "Succeeded or Failed"
Jul 31 08:07:13.094: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-9k4t container test-container-volume-preprovisionedpv-9k4t: <nil>
STEP: delete the pod
Jul 31 08:07:13.306: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9k4t to disappear
Jul 31 08:07:13.409: INFO: Pod pod-subpath-test-preprovisionedpv-9k4t no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9k4t
Jul 31 08:07:13.409: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9k4t" in namespace "provisioning-2564"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":11,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:14.954: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Jul 31 08:07:12.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 31 08:07:12.635: INFO: Waiting up to 5m0s for pod "downward-api-b8ef3efd-9f28-43ef-a940-88de9604da0d" in namespace "downward-api-867" to be "Succeeded or Failed"
Jul 31 08:07:12.738: INFO: Pod "downward-api-b8ef3efd-9f28-43ef-a940-88de9604da0d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.722437ms
Jul 31 08:07:14.842: INFO: Pod "downward-api-b8ef3efd-9f28-43ef-a940-88de9604da0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207456786s
STEP: Saw pod success
Jul 31 08:07:14.842: INFO: Pod "downward-api-b8ef3efd-9f28-43ef-a940-88de9604da0d" satisfied condition "Succeeded or Failed"
Jul 31 08:07:14.945: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod downward-api-b8ef3efd-9f28-43ef-a940-88de9604da0d container dapi-container: <nil>
STEP: delete the pod
Jul 31 08:07:15.161: INFO: Waiting for pod downward-api-b8ef3efd-9f28-43ef-a940-88de9604da0d to disappear
Jul 31 08:07:15.264: INFO: Pod downward-api-b8ef3efd-9f28-43ef-a940-88de9604da0d no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:16.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3254" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":5,"skipped":13,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:16.491: INFO: Only supported for providers [azure] (not aws)
... skipping 63 lines ...
Jul 31 08:07:00.221: INFO: PersistentVolumeClaim pvc-cjvtr found but phase is Pending instead of Bound.
Jul 31 08:07:02.325: INFO: PersistentVolumeClaim pvc-cjvtr found and phase=Bound (6.418586887s)
Jul 31 08:07:02.325: INFO: Waiting up to 3m0s for PersistentVolume local-8cv9s to have phase Bound
Jul 31 08:07:02.429: INFO: PersistentVolume local-8cv9s found and phase=Bound (103.201404ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vz2s
STEP: Creating a pod to test subpath
Jul 31 08:07:02.739: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vz2s" in namespace "provisioning-8003" to be "Succeeded or Failed"
Jul 31 08:07:02.841: INFO: Pod "pod-subpath-test-preprovisionedpv-vz2s": Phase="Pending", Reason="", readiness=false. Elapsed: 102.249745ms
Jul 31 08:07:04.944: INFO: Pod "pod-subpath-test-preprovisionedpv-vz2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204769216s
Jul 31 08:07:07.046: INFO: Pod "pod-subpath-test-preprovisionedpv-vz2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306983356s
Jul 31 08:07:09.150: INFO: Pod "pod-subpath-test-preprovisionedpv-vz2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410926245s
Jul 31 08:07:11.253: INFO: Pod "pod-subpath-test-preprovisionedpv-vz2s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513491112s
Jul 31 08:07:13.357: INFO: Pod "pod-subpath-test-preprovisionedpv-vz2s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61815886s
Jul 31 08:07:15.461: INFO: Pod "pod-subpath-test-preprovisionedpv-vz2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.721510097s
STEP: Saw pod success
Jul 31 08:07:15.461: INFO: Pod "pod-subpath-test-preprovisionedpv-vz2s" satisfied condition "Succeeded or Failed"
Jul 31 08:07:15.572: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-vz2s container test-container-subpath-preprovisionedpv-vz2s: <nil>
STEP: delete the pod
Jul 31 08:07:15.795: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vz2s to disappear
Jul 31 08:07:15.897: INFO: Pod pod-subpath-test-preprovisionedpv-vz2s no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vz2s
Jul 31 08:07:15.897: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vz2s" in namespace "provisioning-8003"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":9,"skipped":80,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}

SSS
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":10,"skipped":56,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:05:50.231: INFO: >>> kubeConfig: /root/.kube/config
... skipping 106 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":11,"skipped":56,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

S
------------------------------
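The csi-hostpath "read-only inline ephemeral volume" test above uses a generic ephemeral volume: the pod embeds a PVC template, the claim is created when the pod is scheduled, and it is deleted with the pod (read-only is then set on the container's volumeMount). A corev1 sketch of the volume source; the volume name and size are illustrative, and the Resources field type differs in newer k8s.io/api releases:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// An inline generic ephemeral volume: the PVC is stamped out from this
// template for each pod and shares the pod's lifetime.
var ephemeralVol = corev1.Volume{
	Name: "scratch", // illustrative
	VolumeSource: corev1.VolumeSource{
		Ephemeral: &corev1.EphemeralVolumeSource{
			VolumeClaimTemplate: &corev1.PersistentVolumeClaimTemplate{
				Spec: corev1.PersistentVolumeClaimSpec{
					AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
					Resources: corev1.ResourceRequirements{
						Requests: corev1.ResourceList{
							corev1.ResourceStorage: resource.MustParse("1Gi"),
						},
					},
				},
			},
		},
	},
}

------------------------------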
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:17.614: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:18.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-9013" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":12,"skipped":61,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:16.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Jul 31 08:07:17.158: INFO: Waiting up to 5m0s for pod "downward-api-9d055cd5-070d-4660-bb1e-f7a3105e4a7e" in namespace "downward-api-4978" to be "Succeeded or Failed"
Jul 31 08:07:17.260: INFO: Pod "downward-api-9d055cd5-070d-4660-bb1e-f7a3105e4a7e": Phase="Pending", Reason="", readiness=false. Elapsed: 101.341799ms
Jul 31 08:07:19.362: INFO: Pod "downward-api-9d055cd5-070d-4660-bb1e-f7a3105e4a7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.203902623s
STEP: Saw pod success
Jul 31 08:07:19.362: INFO: Pod "downward-api-9d055cd5-070d-4660-bb1e-f7a3105e4a7e" satisfied condition "Succeeded or Failed"
Jul 31 08:07:19.465: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod downward-api-9d055cd5-070d-4660-bb1e-f7a3105e4a7e container dapi-container: <nil>
STEP: delete the pod
Jul 31 08:07:19.690: INFO: Waiting for pod downward-api-9d055cd5-070d-4660-bb1e-f7a3105e4a7e to disappear
Jul 31 08:07:19.799: INFO: Pod downward-api-9d055cd5-070d-4660-bb1e-f7a3105e4a7e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:19.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4978" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":6,"skipped":27,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:20.032: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:20.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7852" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":13,"skipped":67,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:20.479: INFO: Only supported for providers [gce gke] (not aws)
... skipping 125 lines ...
Jul 31 08:07:13.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 31 08:07:13.654: INFO: Waiting up to 5m0s for pod "pod-bf46d068-f921-4b64-b938-65ea585ec208" in namespace "emptydir-8970" to be "Succeeded or Failed"
Jul 31 08:07:13.755: INFO: Pod "pod-bf46d068-f921-4b64-b938-65ea585ec208": Phase="Pending", Reason="", readiness=false. Elapsed: 101.817209ms
Jul 31 08:07:15.858: INFO: Pod "pod-bf46d068-f921-4b64-b938-65ea585ec208": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204118511s
Jul 31 08:07:17.960: INFO: Pod "pod-bf46d068-f921-4b64-b938-65ea585ec208": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306359374s
Jul 31 08:07:20.065: INFO: Pod "pod-bf46d068-f921-4b64-b938-65ea585ec208": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411703999s
Jul 31 08:07:22.167: INFO: Pod "pod-bf46d068-f921-4b64-b938-65ea585ec208": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.513150259s
STEP: Saw pod success
Jul 31 08:07:22.167: INFO: Pod "pod-bf46d068-f921-4b64-b938-65ea585ec208" satisfied condition "Succeeded or Failed"
Jul 31 08:07:22.268: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-bf46d068-f921-4b64-b938-65ea585ec208 container test-container: <nil>
STEP: delete the pod
Jul 31 08:07:22.495: INFO: Waiting for pod pod-bf46d068-f921-4b64-b938-65ea585ec208 to disappear
Jul 31 08:07:22.596: INFO: Pod pod-bf46d068-f921-4b64-b938-65ea585ec208 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.760 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:22.818: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
Jul 31 08:07:00.917: INFO: PersistentVolumeClaim pvc-vskd5 found and phase=Bound (8.510017704s)
Jul 31 08:07:00.917: INFO: Waiting up to 3m0s for PersistentVolume nfs-l2qrw to have phase Bound
Jul 31 08:07:01.018: INFO: PersistentVolume nfs-l2qrw found and phase=Bound (100.859188ms)
STEP: Checking pod has write access to PersistentVolume
Jul 31 08:07:01.222: INFO: Creating nfs test pod
Jul 31 08:07:01.324: INFO: Pod should terminate with exitcode 0 (success)
Jul 31 08:07:01.324: INFO: Waiting up to 5m0s for pod "pvc-tester-gbkhz" in namespace "pv-7226" to be "Succeeded or Failed"
Jul 31 08:07:01.425: INFO: Pod "pvc-tester-gbkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 100.939774ms
Jul 31 08:07:03.528: INFO: Pod "pvc-tester-gbkhz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204501463s
Jul 31 08:07:05.630: INFO: Pod "pvc-tester-gbkhz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.30619801s
STEP: Saw pod success
Jul 31 08:07:05.630: INFO: Pod "pvc-tester-gbkhz" satisfied condition "Succeeded or Failed"
Jul 31 08:07:05.630: INFO: Pod pvc-tester-gbkhz succeeded 
Jul 31 08:07:05.630: INFO: Deleting pod "pvc-tester-gbkhz" in namespace "pv-7226"
Jul 31 08:07:05.736: INFO: Wait up to 5m0s for pod "pvc-tester-gbkhz" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul 31 08:07:05.837: INFO: Deleting PVC pvc-vskd5 to trigger reclamation of PV 
Jul 31 08:07:05.837: INFO: Deleting PersistentVolumeClaim "pvc-vskd5"
... skipping 25 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":6,"skipped":37,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:22.885: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 79 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 31 08:07:18.635: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ad37167d-2a82-45ee-a6d8-d50f769691d9"
Jul 31 08:07:18.635: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ad37167d-2a82-45ee-a6d8-d50f769691d9" in namespace "pods-2434" to be "terminated due to deadline exceeded"
Jul 31 08:07:18.736: INFO: Pod "pod-update-activedeadlineseconds-ad37167d-2a82-45ee-a6d8-d50f769691d9": Phase="Running", Reason="", readiness=true. Elapsed: 100.502542ms
Jul 31 08:07:20.837: INFO: Pod "pod-update-activedeadlineseconds-ad37167d-2a82-45ee-a6d8-d50f769691d9": Phase="Running", Reason="", readiness=true. Elapsed: 2.201916595s
Jul 31 08:07:22.938: INFO: Pod "pod-update-activedeadlineseconds-ad37167d-2a82-45ee-a6d8-d50f769691d9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.303130708s
Jul 31 08:07:22.938: INFO: Pod "pod-update-activedeadlineseconds-ad37167d-2a82-45ee-a6d8-d50f769691d9" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:22.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2434" for this suite.

... skipping 7 lines ...
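
The pods-2434 test above updates a running pod to set activeDeadlineSeconds and then waits for the kubelet to terminate it, which is why the phase flips from Running to Failed with Reason="DeadlineExceeded" after a few seconds. A minimal sketch of the field on a PodSpec; the name, image, and 5-second deadline are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// Once this pod has been active for 5 seconds, the kubelet kills it and
// the pod ends up Failed with Reason="DeadlineExceeded".
var deadlinePod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "deadline-example"}, // illustrative
	Spec: corev1.PodSpec{
		ActiveDeadlineSeconds: int64Ptr(5),
		Containers: []corev1.Container{{
			Name:    "sleeper",
			Image:   "busybox",
			Command: []string{"sleep", "3600"},
		}},
	},
}

------------------------------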
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:22.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-6bb8f6f7-6712-4336-8f7c-c2a7b7cee528
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:23.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2543" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":7,"skipped":53,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 25 lines ...
Jul 31 08:07:15.024: INFO: PersistentVolumeClaim pvc-sssn8 found but phase is Pending instead of Bound.
Jul 31 08:07:17.126: INFO: PersistentVolumeClaim pvc-sssn8 found and phase=Bound (12.72947788s)
Jul 31 08:07:17.126: INFO: Waiting up to 3m0s for PersistentVolume local-gdgmq to have phase Bound
Jul 31 08:07:17.228: INFO: PersistentVolume local-gdgmq found and phase=Bound (101.796434ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ncnb
STEP: Creating a pod to test subpath
Jul 31 08:07:17.536: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ncnb" in namespace "provisioning-6327" to be "Succeeded or Failed"
Jul 31 08:07:17.638: INFO: Pod "pod-subpath-test-preprovisionedpv-ncnb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.096595ms
Jul 31 08:07:19.742: INFO: Pod "pod-subpath-test-preprovisionedpv-ncnb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206525726s
STEP: Saw pod success
Jul 31 08:07:19.743: INFO: Pod "pod-subpath-test-preprovisionedpv-ncnb" satisfied condition "Succeeded or Failed"
Jul 31 08:07:19.844: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-ncnb container test-container-volume-preprovisionedpv-ncnb: <nil>
STEP: delete the pod
Jul 31 08:07:20.065: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ncnb to disappear
Jul 31 08:07:20.168: INFO: Pod pod-subpath-test-preprovisionedpv-ncnb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ncnb
Jul 31 08:07:20.168: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ncnb" in namespace "provisioning-6327"
... skipping 95 lines ...
Jul 31 08:06:59.812: INFO: PersistentVolumeClaim pvc-4sxf6 found but phase is Pending instead of Bound.
Jul 31 08:07:01.914: INFO: PersistentVolumeClaim pvc-4sxf6 found and phase=Bound (8.516944023s)
Jul 31 08:07:01.914: INFO: Waiting up to 3m0s for PersistentVolume local-cmjcb to have phase Bound
Jul 31 08:07:02.015: INFO: PersistentVolume local-cmjcb found and phase=Bound (101.208019ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-6ctz
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:07:02.328: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-6ctz" in namespace "provisioning-8494" to be "Succeeded or Failed"
Jul 31 08:07:02.430: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Pending", Reason="", readiness=false. Elapsed: 101.550585ms
Jul 31 08:07:04.531: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203390142s
Jul 31 08:07:06.633: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305040601s
Jul 31 08:07:08.736: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Running", Reason="", readiness=true. Elapsed: 6.407705232s
Jul 31 08:07:10.845: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Running", Reason="", readiness=true. Elapsed: 8.516995189s
Jul 31 08:07:12.947: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Running", Reason="", readiness=true. Elapsed: 10.618943151s
Jul 31 08:07:15.050: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Running", Reason="", readiness=true. Elapsed: 12.721607785s
Jul 31 08:07:17.156: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Running", Reason="", readiness=true. Elapsed: 14.827872452s
Jul 31 08:07:19.260: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Running", Reason="", readiness=true. Elapsed: 16.932210202s
Jul 31 08:07:21.362: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Running", Reason="", readiness=true. Elapsed: 19.033814387s
Jul 31 08:07:23.464: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Running", Reason="", readiness=true. Elapsed: 21.135760338s
Jul 31 08:07:25.567: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.238580265s
STEP: Saw pod success
Jul 31 08:07:25.567: INFO: Pod "pod-subpath-test-preprovisionedpv-6ctz" satisfied condition "Succeeded or Failed"
Jul 31 08:07:25.668: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-6ctz container test-container-subpath-preprovisionedpv-6ctz: <nil>
STEP: delete the pod
Jul 31 08:07:25.879: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-6ctz to disappear
Jul 31 08:07:25.980: INFO: Pod pod-subpath-test-preprovisionedpv-6ctz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-6ctz
Jul 31 08:07:25.980: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-6ctz" in namespace "provisioning-8494"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":8,"skipped":70,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:27.507: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 58 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":55,"failed":0}
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:23.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-51be580f-813e-4be6-8d6d-3a40a44bfd97
STEP: Creating a pod to test consume secrets
Jul 31 08:07:24.734: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-87a7af40-7299-4e2a-86f0-315b0a7d14ab" in namespace "projected-4249" to be "Succeeded or Failed"
Jul 31 08:07:24.837: INFO: Pod "pod-projected-secrets-87a7af40-7299-4e2a-86f0-315b0a7d14ab": Phase="Pending", Reason="", readiness=false. Elapsed: 102.498779ms
Jul 31 08:07:26.943: INFO: Pod "pod-projected-secrets-87a7af40-7299-4e2a-86f0-315b0a7d14ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208147473s
STEP: Saw pod success
Jul 31 08:07:26.943: INFO: Pod "pod-projected-secrets-87a7af40-7299-4e2a-86f0-315b0a7d14ab" satisfied condition "Succeeded or Failed"
Jul 31 08:07:27.044: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-projected-secrets-87a7af40-7299-4e2a-86f0-315b0a7d14ab container secret-volume-test: <nil>
STEP: delete the pod
Jul 31 08:07:27.260: INFO: Waiting for pod pod-projected-secrets-87a7af40-7299-4e2a-86f0-315b0a7d14ab to disappear
Jul 31 08:07:27.362: INFO: Pod pod-projected-secrets-87a7af40-7299-4e2a-86f0-315b0a7d14ab no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:27.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4249" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":55,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:17.255 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:583
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":10,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:31.210: INFO: Only supported for providers [gce gke] (not aws)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:32.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-7021" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":11,"skipped":71,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:32.332: INFO: Only supported for providers [openstack] (not aws)
... skipping 245 lines ...
Jul 31 08:07:27.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Jul 31 08:07:28.180: INFO: Waiting up to 5m0s for pod "var-expansion-3bab61fc-12b5-4fe1-ba3a-b3992eea4d88" in namespace "var-expansion-3944" to be "Succeeded or Failed"
Jul 31 08:07:28.282: INFO: Pod "var-expansion-3bab61fc-12b5-4fe1-ba3a-b3992eea4d88": Phase="Pending", Reason="", readiness=false. Elapsed: 102.050801ms
Jul 31 08:07:30.385: INFO: Pod "var-expansion-3bab61fc-12b5-4fe1-ba3a-b3992eea4d88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204264925s
Jul 31 08:07:32.487: INFO: Pod "var-expansion-3bab61fc-12b5-4fe1-ba3a-b3992eea4d88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.306756488s
STEP: Saw pod success
Jul 31 08:07:32.487: INFO: Pod "var-expansion-3bab61fc-12b5-4fe1-ba3a-b3992eea4d88" satisfied condition "Succeeded or Failed"
Jul 31 08:07:32.594: INFO: Trying to get logs from node ip-172-20-54-176.eu-west-2.compute.internal pod var-expansion-3bab61fc-12b5-4fe1-ba3a-b3992eea4d88 container dapi-container: <nil>
STEP: delete the pod
Jul 31 08:07:32.806: INFO: Waiting for pod var-expansion-3bab61fc-12b5-4fe1-ba3a-b3992eea4d88 to disappear
Jul 31 08:07:32.907: INFO: Pod var-expansion-3bab61fc-12b5-4fe1-ba3a-b3992eea4d88 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Container restart
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130
    should verify that container can restart successfully after configmaps modified
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":3,"skipped":27,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes GCEPD
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
Jul 31 08:07:34.336: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.722 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:142

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 4 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-fd288e1f-ab86-452e-8497-7f860450eede
STEP: Creating a pod to test consume secrets
Jul 31 08:07:33.278: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b1906c58-f178-4c0b-a9e5-d34082d72ec2" in namespace "projected-5048" to be "Succeeded or Failed"
Jul 31 08:07:33.379: INFO: Pod "pod-projected-secrets-b1906c58-f178-4c0b-a9e5-d34082d72ec2": Phase="Pending", Reason="", readiness=false. Elapsed: 101.041537ms
Jul 31 08:07:35.480: INFO: Pod "pod-projected-secrets-b1906c58-f178-4c0b-a9e5-d34082d72ec2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.202490793s
STEP: Saw pod success
Jul 31 08:07:35.481: INFO: Pod "pod-projected-secrets-b1906c58-f178-4c0b-a9e5-d34082d72ec2" satisfied condition "Succeeded or Failed"
Jul 31 08:07:35.582: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-projected-secrets-b1906c58-f178-4c0b-a9e5-d34082d72ec2 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 31 08:07:35.793: INFO: Waiting for pod pod-projected-secrets-b1906c58-f178-4c0b-a9e5-d34082d72ec2 to disappear
Jul 31 08:07:35.899: INFO: Pod pod-projected-secrets-b1906c58-f178-4c0b-a9e5-d34082d72ec2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:35.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5048" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":113,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":-1,"completed":14,"skipped":81,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:21.721: INFO: >>> kubeConfig: /root/.kube/config
... skipping 12 lines ...
Jul 31 08:07:29.814: INFO: PersistentVolumeClaim pvc-wcxxr found but phase is Pending instead of Bound.
Jul 31 08:07:31.916: INFO: PersistentVolumeClaim pvc-wcxxr found and phase=Bound (2.203704191s)
Jul 31 08:07:31.916: INFO: Waiting up to 3m0s for PersistentVolume local-9xrq2 to have phase Bound
Jul 31 08:07:32.017: INFO: PersistentVolume local-9xrq2 found and phase=Bound (101.047218ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nsz7
STEP: Creating a pod to test subpath
Jul 31 08:07:32.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nsz7" in namespace "provisioning-7803" to be "Succeeded or Failed"
Jul 31 08:07:32.425: INFO: Pod "pod-subpath-test-preprovisionedpv-nsz7": Phase="Pending", Reason="", readiness=false. Elapsed: 101.248808ms
Jul 31 08:07:34.527: INFO: Pod "pod-subpath-test-preprovisionedpv-nsz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203420909s
Jul 31 08:07:36.629: INFO: Pod "pod-subpath-test-preprovisionedpv-nsz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305789256s
STEP: Saw pod success
Jul 31 08:07:36.629: INFO: Pod "pod-subpath-test-preprovisionedpv-nsz7" satisfied condition "Succeeded or Failed"
Jul 31 08:07:36.733: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-nsz7 container test-container-subpath-preprovisionedpv-nsz7: <nil>
STEP: delete the pod
Jul 31 08:07:36.963: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nsz7 to disappear
Jul 31 08:07:37.064: INFO: Pod pod-subpath-test-preprovisionedpv-nsz7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nsz7
Jul 31 08:07:37.064: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nsz7" in namespace "provisioning-7803"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":15,"skipped":81,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:38.593: INFO: Only supported for providers [vsphere] (not aws)
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":11,"skipped":56,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:40.350: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
• [SLOW TEST:29.771 seconds]
[sig-storage] Mounted volume expand
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Should verify mounted devices can be resized
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/mounted_volume_resize.go:122
------------------------------
{"msg":"PASSED [sig-storage] Mounted volume expand Should verify mounted devices can be resized","total":-1,"completed":5,"skipped":43,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Jul 31 08:07:14.126: INFO: PersistentVolumeClaim pvc-dmwdw found but phase is Pending instead of Bound.
Jul 31 08:07:16.232: INFO: PersistentVolumeClaim pvc-dmwdw found and phase=Bound (8.513507546s)
Jul 31 08:07:16.232: INFO: Waiting up to 3m0s for PersistentVolume local-52kht to have phase Bound
Jul 31 08:07:16.333: INFO: PersistentVolume local-52kht found and phase=Bound (101.090947ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-2frh
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:07:16.642: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-2frh" in namespace "provisioning-1891" to be "Succeeded or Failed"
Jul 31 08:07:16.750: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Pending", Reason="", readiness=false. Elapsed: 108.470286ms
Jul 31 08:07:18.854: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212124614s
Jul 31 08:07:20.956: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314352775s
Jul 31 08:07:23.058: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Running", Reason="", readiness=true. Elapsed: 6.416503615s
Jul 31 08:07:25.161: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Running", Reason="", readiness=true. Elapsed: 8.519360598s
Jul 31 08:07:27.265: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Running", Reason="", readiness=true. Elapsed: 10.623534797s
... skipping 2 lines ...
Jul 31 08:07:33.585: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Running", Reason="", readiness=true. Elapsed: 16.94320501s
Jul 31 08:07:35.687: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Running", Reason="", readiness=true. Elapsed: 19.045340667s
Jul 31 08:07:37.789: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Running", Reason="", readiness=true. Elapsed: 21.147396564s
Jul 31 08:07:39.895: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Running", Reason="", readiness=true. Elapsed: 23.25387618s
Jul 31 08:07:42.002: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.360718119s
STEP: Saw pod success
Jul 31 08:07:42.002: INFO: Pod "pod-subpath-test-preprovisionedpv-2frh" satisfied condition "Succeeded or Failed"
Jul 31 08:07:42.103: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-2frh container test-container-subpath-preprovisionedpv-2frh: <nil>
STEP: delete the pod
Jul 31 08:07:42.313: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-2frh to disappear
Jul 31 08:07:42.415: INFO: Pod pod-subpath-test-preprovisionedpv-2frh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-2frh
Jul 31 08:07:42.415: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-2frh" in namespace "provisioning-1891"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":7,"skipped":31,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:44.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4428" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":12,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:44.289: INFO: Driver local doesn't support ext4 -- skipping
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-f7sg
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:07:18.263: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-f7sg" in namespace "subpath-4973" to be "Succeeded or Failed"
Jul 31 08:07:18.366: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Pending", Reason="", readiness=false. Elapsed: 102.157982ms
Jul 31 08:07:20.469: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205152979s
Jul 31 08:07:22.573: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309001384s
Jul 31 08:07:24.683: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418979816s
Jul 31 08:07:26.786: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Running", Reason="", readiness=true. Elapsed: 8.52200373s
Jul 31 08:07:28.889: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Running", Reason="", readiness=true. Elapsed: 10.625750116s
... skipping 3 lines ...
Jul 31 08:07:37.310: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Running", Reason="", readiness=true. Elapsed: 19.046643538s
Jul 31 08:07:39.413: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Running", Reason="", readiness=true. Elapsed: 21.149259994s
Jul 31 08:07:41.517: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Running", Reason="", readiness=true. Elapsed: 23.253126347s
Jul 31 08:07:43.627: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Running", Reason="", readiness=true. Elapsed: 25.363744376s
Jul 31 08:07:45.730: INFO: Pod "pod-subpath-test-downwardapi-f7sg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.46658243s
STEP: Saw pod success
Jul 31 08:07:45.730: INFO: Pod "pod-subpath-test-downwardapi-f7sg" satisfied condition "Succeeded or Failed"
Jul 31 08:07:45.833: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-subpath-test-downwardapi-f7sg container test-container-subpath-downwardapi-f7sg: <nil>
STEP: delete the pod
Jul 31 08:07:46.054: INFO: Waiting for pod pod-subpath-test-downwardapi-f7sg to disappear
Jul 31 08:07:46.156: INFO: Pod pod-subpath-test-downwardapi-f7sg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-f7sg
Jul 31 08:07:46.156: INFO: Deleting pod "pod-subpath-test-downwardapi-f7sg" in namespace "subpath-4973"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":83,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:46.489: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:49.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8248" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":11,"skipped":86,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:49.853: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 44 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-6a97485e-7b30-4aaa-bbf8-d2d0775f9eed
STEP: Creating a pod to test consume configMaps
Jul 31 08:07:44.703: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11" in namespace "projected-3830" to be "Succeeded or Failed"
Jul 31 08:07:44.807: INFO: Pod "pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11": Phase="Pending", Reason="", readiness=false. Elapsed: 103.979408ms
Jul 31 08:07:46.908: INFO: Pod "pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205677009s
Jul 31 08:07:49.014: INFO: Pod "pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311289068s
Jul 31 08:07:51.116: INFO: Pod "pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.412807778s
STEP: Saw pod success
Jul 31 08:07:51.116: INFO: Pod "pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11" satisfied condition "Succeeded or Failed"
Jul 31 08:07:51.217: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11 container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:07:51.430: INFO: Waiting for pod pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11 to disappear
Jul 31 08:07:51.533: INFO: Pod pod-projected-configmaps-31b6e520-1a77-4562-ae3c-f14194bf6f11 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.753 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":36,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:51.761: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":64,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:23.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:29.817 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":11,"skipped":64,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 17 lines ...
Jul 31 08:07:44.416: INFO: PersistentVolumeClaim pvc-h5jbx found but phase is Pending instead of Bound.
Jul 31 08:07:46.517: INFO: PersistentVolumeClaim pvc-h5jbx found and phase=Bound (6.409007185s)
Jul 31 08:07:46.517: INFO: Waiting up to 3m0s for PersistentVolume local-bq9wg to have phase Bound
Jul 31 08:07:46.618: INFO: PersistentVolume local-bq9wg found and phase=Bound (100.845202ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-x6dv
STEP: Creating a pod to test subpath
Jul 31 08:07:46.924: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-x6dv" in namespace "provisioning-6557" to be "Succeeded or Failed"
Jul 31 08:07:47.025: INFO: Pod "pod-subpath-test-preprovisionedpv-x6dv": Phase="Pending", Reason="", readiness=false. Elapsed: 101.055047ms
Jul 31 08:07:49.127: INFO: Pod "pod-subpath-test-preprovisionedpv-x6dv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202933245s
Jul 31 08:07:51.230: INFO: Pod "pod-subpath-test-preprovisionedpv-x6dv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305799255s
STEP: Saw pod success
Jul 31 08:07:51.230: INFO: Pod "pod-subpath-test-preprovisionedpv-x6dv" satisfied condition "Succeeded or Failed"
Jul 31 08:07:51.331: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-x6dv container test-container-volume-preprovisionedpv-x6dv: <nil>
STEP: delete the pod
Jul 31 08:07:51.541: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-x6dv to disappear
Jul 31 08:07:51.642: INFO: Pod pod-subpath-test-preprovisionedpv-x6dv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-x6dv
Jul 31 08:07:51.642: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-x6dv" in namespace "provisioning-6557"
... skipping 29 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Jul 31 08:07:53.612: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-1556" to be "Succeeded or Failed"
Jul 31 08:07:53.715: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 102.566497ms
Jul 31 08:07:55.818: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206150754s
Jul 31 08:07:55.818: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:55.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1556" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":12,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:56.173: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:57.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-6544" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":13,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:07:57.744: INFO: Only supported for providers [openstack] (not aws)
... skipping 157 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume by restarting pod if attach=on, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:57.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-e2bb251c-af42-472f-9d1e-73e892757e43
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:07:58.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9373" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":7,"skipped":33,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
... skipping 107 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":10,"skipped":63,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:08:01.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9074" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":8,"skipped":45,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":55,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:03.922: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 91 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-d63840c8-b92f-4d41-a32b-c0d6c7126f94
STEP: Creating a pod to test consume configMaps
Jul 31 08:07:58.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0931624-74c1-4b50-83b1-221814560833" in namespace "configmap-4254" to be "Succeeded or Failed"
Jul 31 08:07:58.612: INFO: Pod "pod-configmaps-a0931624-74c1-4b50-83b1-221814560833": Phase="Pending", Reason="", readiness=false. Elapsed: 106.745489ms
Jul 31 08:08:00.717: INFO: Pod "pod-configmaps-a0931624-74c1-4b50-83b1-221814560833": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211865911s
Jul 31 08:08:02.822: INFO: Pod "pod-configmaps-a0931624-74c1-4b50-83b1-221814560833": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317007273s
Jul 31 08:08:04.927: INFO: Pod "pod-configmaps-a0931624-74c1-4b50-83b1-221814560833": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421929755s
Jul 31 08:08:07.031: INFO: Pod "pod-configmaps-a0931624-74c1-4b50-83b1-221814560833": Phase="Pending", Reason="", readiness=false. Elapsed: 8.525603684s
Jul 31 08:08:09.134: INFO: Pod "pod-configmaps-a0931624-74c1-4b50-83b1-221814560833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.629087075s
STEP: Saw pod success
Jul 31 08:08:09.134: INFO: Pod "pod-configmaps-a0931624-74c1-4b50-83b1-221814560833" satisfied condition "Succeeded or Failed"
Jul 31 08:08:09.240: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-configmaps-a0931624-74c1-4b50-83b1-221814560833 container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:08:09.483: INFO: Waiting for pod pod-configmaps-a0931624-74c1-4b50-83b1-221814560833 to disappear
Jul 31 08:08:09.585: INFO: Pod pod-configmaps-a0931624-74c1-4b50-83b1-221814560833 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.013 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":14,"skipped":88,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":28,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]"]}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:06:33.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 35 lines ...
Jul 31 08:06:37.589: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-5821
Jul 31 08:06:37.692: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-5821
Jul 31 08:06:37.795: INFO: creating *v1.StatefulSet: csi-mock-volumes-5821-9677/csi-mockplugin
Jul 31 08:06:37.900: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-5821
Jul 31 08:06:38.003: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-5821"
Jul 31 08:06:38.115: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-5821 to register on node ip-172-20-61-108.eu-west-2.compute.internal
I0731 08:06:47.900148    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0731 08:06:48.001045    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5821","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0731 08:06:48.141875    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0731 08:06:48.245756    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0731 08:06:48.441844    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-5821","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0731 08:06:49.059908    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-5821"},"Error":"","FullError":null}
STEP: Creating pod
Jul 31 08:06:55.084: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0731 08:06:55.308445    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0731 08:06:55.415644    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e"}}},"Error":"","FullError":null}
I0731 08:06:57.494922    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul 31 08:06:57.597: INFO: >>> kubeConfig: /root/.kube/config
I0731 08:06:58.408015    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e","storage.kubernetes.io/csiProvisionerIdentity":"1627718808296-8081-csi-mock-csi-mock-volumes-5821"}},"Response":{},"Error":"","FullError":null}
I0731 08:06:58.754475    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul 31 08:06:58.859: INFO: >>> kubeConfig: /root/.kube/config
Jul 31 08:06:59.608: INFO: >>> kubeConfig: /root/.kube/config
Jul 31 08:07:00.357: INFO: >>> kubeConfig: /root/.kube/config
I0731 08:07:01.126450    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e/globalmount","target_path":"/var/lib/kubelet/pods/f68958e0-6ec4-4cf3-b260-52587f599b96/volumes/kubernetes.io~csi/pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e","storage.kubernetes.io/csiProvisionerIdentity":"1627718808296-8081-csi-mock-csi-mock-volumes-5821"}},"Response":{},"Error":"","FullError":null}
Jul 31 08:07:07.494: INFO: Deleting pod "pvc-volume-tester-rwqdw" in namespace "csi-mock-volumes-5821"
Jul 31 08:07:07.598: INFO: Wait up to 5m0s for pod "pvc-volume-tester-rwqdw" to be fully deleted
Jul 31 08:07:09.942: INFO: >>> kubeConfig: /root/.kube/config
I0731 08:07:10.674452    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/f68958e0-6ec4-4cf3-b260-52587f599b96/volumes/kubernetes.io~csi/pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e/mount"},"Response":{},"Error":"","FullError":null}
I0731 08:07:10.850868    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0731 08:07:10.953776    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e/globalmount"},"Response":{},"Error":"","FullError":null}
I0731 08:07:13.920850    4863 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jul 31 08:07:14.908: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-q7xb2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5821", SelfLink:"", UID:"064d8b47-c6f6-4116-bb9a-e52b9f17c72e", ResourceVersion:"8226", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315615, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002001e90), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002001ea8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0027200b0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0027200c0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 31 08:07:14.909: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-q7xb2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5821", SelfLink:"", UID:"064d8b47-c6f6-4116-bb9a-e52b9f17c72e", ResourceVersion:"8228", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315615, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-61-108.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00181bf08), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00181bf38)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00181bf50), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00181bf68)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002358870), VolumeMode:(*v1.PersistentVolumeMode)(0xc002358880), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 31 08:07:14.909: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-q7xb2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5821", SelfLink:"", UID:"064d8b47-c6f6-4116-bb9a-e52b9f17c72e", ResourceVersion:"8229", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315615, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5821", "volume.kubernetes.io/selected-node":"ip-172-20-61-108.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0022fa648), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0022fa660)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0022fa678), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0022fa690)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0022fa6a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0022fa6c0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0027207e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0027207f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 31 08:07:14.909: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-q7xb2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5821", SelfLink:"", UID:"064d8b47-c6f6-4116-bb9a-e52b9f17c72e", ResourceVersion:"8235", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315615, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5821", "volume.kubernetes.io/selected-node":"ip-172-20-61-108.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002732018), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002732030)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002732048), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002732060)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002732078), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002732090)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e", StorageClassName:(*string)(0xc002b0e020), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b0e030), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 31 08:07:14.909: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-q7xb2", GenerateName:"pvc-", Namespace:"csi-mock-volumes-5821", SelfLink:"", UID:"064d8b47-c6f6-4116-bb9a-e52b9f17c72e", ResourceVersion:"8237", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63763315615, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-5821", "volume.kubernetes.io/selected-node":"ip-172-20-61-108.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0027320d8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0027320f0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002732120), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002732138)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002732168), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002732180)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-064d8b47-c6f6-4116-bb9a-e52b9f17c72e", StorageClassName:(*string)(0xc002b0e060), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b0e070), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":5,"skipped":28,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:09.960: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
• [SLOW TEST:8.018 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":9,"skipped":52,"failed":0}

SS
------------------------------
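The {"msg":...} lines between test blocks are machine-readable progress records emitted per parallel runner; the fields visible in them are msg, total (-1 while the total is unknown), completed, skipped, failed, and failures (names of specs that failed earlier on that runner). A small sketch that tallies these records from a saved log follows; the struct and program are assumptions that mirror the field names above, not the runner's own types.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// progressRecord mirrors the fields visible in the log's JSON lines.
// The type itself is hypothetical; only the field names come from the log.
type progressRecord struct {
	Msg       string   `json:"msg"`
	Total     int      `json:"total"`     // -1 while the total is unknown
	Completed int      `json:"completed"` // specs finished on this runner
	Skipped   int      `json:"skipped"`
	Failed    int      `json:"failed"`
	Failures  []string `json:"failures"` // specs that failed earlier
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // log lines can be very long
	passed, failed := 0, 0
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, `{"msg":`) {
			continue // not a progress record
		}
		var rec progressRecord
		if err := json.Unmarshal([]byte(line), &rec); err != nil {
			continue // tolerate anything that only looks like a record
		}
		if strings.HasPrefix(rec.Msg, "PASSED") {
			passed++
		} else if strings.HasPrefix(rec.Msg, "FAILED") {
			failed++
		}
	}
	fmt.Printf("passed=%d failed=%d\n", passed, failed)
}

Run it as, for example, go run tally.go < build-log.txt (file name assumed).

------------------------------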
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:10.132: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
Jul 31 08:07:59.133: INFO: PersistentVolumeClaim pvc-gh5td found but phase is Pending instead of Bound.
Jul 31 08:08:01.237: INFO: PersistentVolumeClaim pvc-gh5td found and phase=Bound (14.84844315s)
Jul 31 08:08:01.237: INFO: Waiting up to 3m0s for PersistentVolume local-pj2ql to have phase Bound
Jul 31 08:08:01.339: INFO: PersistentVolume local-pj2ql found and phase=Bound (102.615925ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gb7b
STEP: Creating a pod to test subpath
Jul 31 08:08:01.649: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gb7b" in namespace "provisioning-9667" to be "Succeeded or Failed"
Jul 31 08:08:01.752: INFO: Pod "pod-subpath-test-preprovisionedpv-gb7b": Phase="Pending", Reason="", readiness=false. Elapsed: 102.654274ms
Jul 31 08:08:03.856: INFO: Pod "pod-subpath-test-preprovisionedpv-gb7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206930176s
Jul 31 08:08:05.959: INFO: Pod "pod-subpath-test-preprovisionedpv-gb7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309943663s
STEP: Saw pod success
Jul 31 08:08:05.960: INFO: Pod "pod-subpath-test-preprovisionedpv-gb7b" satisfied condition "Succeeded or Failed"
Jul 31 08:08:06.062: INFO: Trying to get logs from node ip-172-20-51-93.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-gb7b container test-container-subpath-preprovisionedpv-gb7b: <nil>
STEP: delete the pod
Jul 31 08:08:06.273: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gb7b to disappear
Jul 31 08:08:06.376: INFO: Pod pod-subpath-test-preprovisionedpv-gb7b no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gb7b
Jul 31 08:08:06.376: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gb7b" in namespace "provisioning-9667"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":44,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}

SSS
------------------------------
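The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' sequences above are the framework polling the pod's phase roughly every two seconds until it reaches a terminal state. A reduced client-go sketch of that pattern follows; the two-second interval, the kubeconfig path, and the wiring are assumptions (the real helper lives in test/e2e/framework), while the namespace and pod name are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodTerminal polls until the pod reaches Succeeded or Failed,
// mirroring the "Waiting up to 5m0s for pod ..." lines in the log.
func waitForPodTerminal(c kubernetes.Interface, ns, name string, timeout time.Duration) (v1.PodPhase, error) {
	var phase v1.PodPhase
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // stop on API errors; retrying instead is a design choice
		}
		phase = pod.Status.Phase
		return phase == v1.PodSucceeded || phase == v1.PodFailed, nil
	})
	return phase, err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	phase, err := waitForPodTerminal(client, "provisioning-9667", "pod-subpath-test-preprovisionedpv-gb7b", 5*time.Minute)
	fmt.Println(phase, err)
}

------------------------------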
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:10.206: INFO: Only supported for providers [gce gke] (not aws)
... skipping 48 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 31 08:08:10.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9226fc69-f54f-489b-8d0a-e9203160f49d" in namespace "projected-1189" to be "Succeeded or Failed"
Jul 31 08:08:10.990: INFO: Pod "downwardapi-volume-9226fc69-f54f-489b-8d0a-e9203160f49d": Phase="Pending", Reason="", readiness=false. Elapsed: 103.35441ms
Jul 31 08:08:13.093: INFO: Pod "downwardapi-volume-9226fc69-f54f-489b-8d0a-e9203160f49d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206329104s
STEP: Saw pod success
Jul 31 08:08:13.093: INFO: Pod "downwardapi-volume-9226fc69-f54f-489b-8d0a-e9203160f49d" satisfied condition "Succeeded or Failed"
Jul 31 08:08:13.196: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod downwardapi-volume-9226fc69-f54f-489b-8d0a-e9203160f49d container client-container: <nil>
STEP: delete the pod
Jul 31 08:08:13.419: INFO: Waiting for pod downwardapi-volume-9226fc69-f54f-489b-8d0a-e9203160f49d to disappear
Jul 31 08:08:13.522: INFO: Pod downwardapi-volume-9226fc69-f54f-489b-8d0a-e9203160f49d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:08:13.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1189" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":57,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 67 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":7,"skipped":30,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:26.217: INFO: >>> kubeConfig: /root/.kube/config
... skipping 39 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":8,"skipped":30,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:19.254: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 117 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":4,"skipped":29,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]"]}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:08:25.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:08:26.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1711" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":5,"skipped":29,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 111 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-0dddda19-1247-450f-b1aa-24f395a0488a
STEP: Creating a pod to test consume configMaps
Jul 31 08:08:19.981: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad" in namespace "projected-572" to be "Succeeded or Failed"
Jul 31 08:08:20.082: INFO: Pod "pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad": Phase="Pending", Reason="", readiness=false. Elapsed: 101.098892ms
Jul 31 08:08:22.184: INFO: Pod "pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20272429s
Jul 31 08:08:24.287: INFO: Pod "pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305563732s
Jul 31 08:08:26.388: INFO: Pod "pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.407215972s
Jul 31 08:08:28.490: INFO: Pod "pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.509417822s
STEP: Saw pod success
Jul 31 08:08:28.490: INFO: Pod "pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad" satisfied condition "Succeeded or Failed"
Jul 31 08:08:28.592: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:08:28.802: INFO: Waiting for pod pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad to disappear
Jul 31 08:08:28.903: INFO: Pod pod-projected-configmaps-32529791-ecd6-4ce6-a71a-8fe99af223ad no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.840 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":31,"failed":1,"failures":["[sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:29.129: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 135 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":13,"skipped":73,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:31.436: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 104 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":13,"skipped":115,"failed":0}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:53.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
Jul 31 08:07:58.053: INFO: Unable to read jessie_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:07:58.157: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:07:58.260: INFO: Unable to read jessie_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:07:58.362: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:07:58.463: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:07:58.565: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:07:59.179: INFO: Lookups using dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2241 wheezy_tcp@dns-test-service.dns-2241 wheezy_udp@dns-test-service.dns-2241.svc wheezy_tcp@dns-test-service.dns-2241.svc wheezy_udp@_http._tcp.dns-test-service.dns-2241.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2241.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2241 jessie_tcp@dns-test-service.dns-2241 jessie_udp@dns-test-service.dns-2241.svc jessie_tcp@dns-test-service.dns-2241.svc jessie_udp@_http._tcp.dns-test-service.dns-2241.svc jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc]

Jul 31 08:08:04.284: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:04.385: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:04.487: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:04.589: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:04.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
... skipping 5 lines ...
Jul 31 08:08:05.917: INFO: Unable to read jessie_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:06.018: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:06.124: INFO: Unable to read jessie_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:06.225: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:06.327: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:06.428: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:07.038: INFO: Lookups using dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2241 wheezy_tcp@dns-test-service.dns-2241 wheezy_udp@dns-test-service.dns-2241.svc wheezy_tcp@dns-test-service.dns-2241.svc wheezy_udp@_http._tcp.dns-test-service.dns-2241.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2241.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2241 jessie_tcp@dns-test-service.dns-2241 jessie_udp@dns-test-service.dns-2241.svc jessie_tcp@dns-test-service.dns-2241.svc jessie_udp@_http._tcp.dns-test-service.dns-2241.svc jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc]

Jul 31 08:08:09.288: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:09.395: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:09.496: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:09.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:09.699: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
... skipping 5 lines ...
Jul 31 08:08:10.924: INFO: Unable to read jessie_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:11.031: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:11.133: INFO: Unable to read jessie_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:11.234: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:11.336: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:11.437: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:12.057: INFO: Lookups using dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2241 wheezy_tcp@dns-test-service.dns-2241 wheezy_udp@dns-test-service.dns-2241.svc wheezy_tcp@dns-test-service.dns-2241.svc wheezy_udp@_http._tcp.dns-test-service.dns-2241.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2241.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2241 jessie_tcp@dns-test-service.dns-2241 jessie_udp@dns-test-service.dns-2241.svc jessie_tcp@dns-test-service.dns-2241.svc jessie_udp@_http._tcp.dns-test-service.dns-2241.svc jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc]

Jul 31 08:08:14.284: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:14.385: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:14.487: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:14.588: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:14.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
... skipping 5 lines ...
Jul 31 08:08:15.987: INFO: Unable to read jessie_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:16.089: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:16.193: INFO: Unable to read jessie_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:16.294: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:16.396: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:16.498: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:17.112: INFO: Lookups using dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2241 wheezy_tcp@dns-test-service.dns-2241 wheezy_udp@dns-test-service.dns-2241.svc wheezy_tcp@dns-test-service.dns-2241.svc wheezy_udp@_http._tcp.dns-test-service.dns-2241.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2241.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2241 jessie_tcp@dns-test-service.dns-2241 jessie_udp@dns-test-service.dns-2241.svc jessie_tcp@dns-test-service.dns-2241.svc jessie_udp@_http._tcp.dns-test-service.dns-2241.svc jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc]

Jul 31 08:08:19.282: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:19.387: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:19.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:19.589: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:19.696: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
... skipping 5 lines ...
Jul 31 08:08:20.932: INFO: Unable to read jessie_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:21.033: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:21.135: INFO: Unable to read jessie_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:21.237: INFO: Unable to read jessie_tcp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:21.339: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:21.441: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:22.061: INFO: Lookups using dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2241 wheezy_tcp@dns-test-service.dns-2241 wheezy_udp@dns-test-service.dns-2241.svc wheezy_tcp@dns-test-service.dns-2241.svc wheezy_udp@_http._tcp.dns-test-service.dns-2241.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2241.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2241 jessie_tcp@dns-test-service.dns-2241 jessie_udp@dns-test-service.dns-2241.svc jessie_tcp@dns-test-service.dns-2241.svc jessie_udp@_http._tcp.dns-test-service.dns-2241.svc jessie_tcp@_http._tcp.dns-test-service.dns-2241.svc]

Jul 31 08:08:24.283: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:24.388: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:24.490: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:24.591: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2241 from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:24.698: INFO: Unable to read wheezy_udp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:24.800: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:24.903: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:25.004: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2241.svc from pod dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e: the server could not find the requested resource (get pods dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e)
Jul 31 08:08:27.119: INFO: Lookups using dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2241 wheezy_tcp@dns-test-service.dns-2241 wheezy_udp@dns-test-service.dns-2241.svc wheezy_tcp@dns-test-service.dns-2241.svc wheezy_udp@_http._tcp.dns-test-service.dns-2241.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2241.svc]

Jul 31 08:08:32.150: INFO: DNS probes using dns-2241/dns-test-03820865-24bf-4ccb-8cff-b7b051664c3e succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:39.502 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":115,"failed":0}

SSSS
------------------------------
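The DNS block above keeps re-querying the wheezy and jessie probe pods until every name resolves; each pass logs "Unable to read ..." for the records that have not propagated yet, then the whole set succeeds at once. A much-reduced sketch of the same retry-until-resolved idea follows; it uses plain net.LookupHost from wherever it runs rather than the in-pod probes the test actually uses, so the name only resolves from inside the cluster (or with a matching resolv.conf search path), and the 5s cadence is an assumption matching the timestamps above.

package main

import (
	"fmt"
	"net"
	"time"
)

// resolveWithRetry keeps retrying a lookup until it succeeds or the
// timeout expires, on roughly the cadence visible in the log.
func resolveWithRetry(host string, timeout time.Duration) ([]string, error) {
	deadline := time.Now().Add(timeout)
	for {
		addrs, err := net.LookupHost(host)
		if err == nil {
			return addrs, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("lookup %s never succeeded: %w", host, err)
		}
		time.Sleep(5 * time.Second)
	}
}

func main() {
	// Fully qualified form of the service name from the log; the
	// cluster.local suffix is an assumption about the cluster domain.
	addrs, err := resolveWithRetry("dns-test-service.dns-2241.svc.cluster.local", 2*time.Minute)
	fmt.Println(addrs, err)
}

------------------------------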
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:32.717: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 41 lines ...
• [SLOW TEST:29.557 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":9,"skipped":65,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:115.647 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not emit unexpected warnings
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:221
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not emit unexpected warnings","total":-1,"completed":6,"skipped":86,"failed":1,"failures":["[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:34.355: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes","total":-1,"completed":5,"skipped":30,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:08:27.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 27 lines ...
• [SLOW TEST:6.980 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":6,"skipped":30,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:34.833: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:08:34.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3982" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":97,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:35.061: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 62 lines ...
• [SLOW TEST:5.820 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":10,"skipped":68,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:39.413: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":8,"skipped":59,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}

S
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":84,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:33.123: INFO: >>> kubeConfig: /root/.kube/config
... skipping 47 lines ...
Jul 31 08:07:37.806: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 31 08:07:37.910: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathvgl6j] to have phase Bound
Jul 31 08:07:38.011: INFO: PersistentVolumeClaim csi-hostpathvgl6j found but phase is Pending instead of Bound.
Jul 31 08:07:40.115: INFO: PersistentVolumeClaim csi-hostpathvgl6j found and phase=Bound (2.204738376s)
STEP: Creating pod pod-subpath-test-dynamicpv-txb9
STEP: Creating a pod to test atomic-volume-subpath
Jul 31 08:07:40.419: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-txb9" in namespace "provisioning-6691" to be "Succeeded or Failed"
Jul 31 08:07:40.521: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Pending", Reason="", readiness=false. Elapsed: 101.296247ms
Jul 31 08:07:42.624: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204699791s
Jul 31 08:07:44.727: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307912278s
Jul 31 08:07:46.830: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4102702s
Jul 31 08:07:48.932: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.512216295s
Jul 31 08:07:51.034: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61441759s
... skipping 6 lines ...
Jul 31 08:08:05.757: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Running", Reason="", readiness=true. Elapsed: 25.337532112s
Jul 31 08:08:07.859: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Running", Reason="", readiness=true. Elapsed: 27.439429617s
Jul 31 08:08:09.961: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Running", Reason="", readiness=true. Elapsed: 29.541318595s
Jul 31 08:08:12.063: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Running", Reason="", readiness=true. Elapsed: 31.643512107s
Jul 31 08:08:14.164: INFO: Pod "pod-subpath-test-dynamicpv-txb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.745012142s
STEP: Saw pod success
Jul 31 08:08:14.165: INFO: Pod "pod-subpath-test-dynamicpv-txb9" satisfied condition "Succeeded or Failed"
Jul 31 08:08:14.266: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-txb9 container test-container-subpath-dynamicpv-txb9: <nil>
STEP: delete the pod
Jul 31 08:08:14.477: INFO: Waiting for pod pod-subpath-test-dynamicpv-txb9 to disappear
Jul 31 08:08:14.578: INFO: Pod pod-subpath-test-dynamicpv-txb9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-txb9
Jul 31 08:08:14.578: INFO: Deleting pod "pod-subpath-test-dynamicpv-txb9" in namespace "provisioning-6691"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":10,"skipped":84,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-275
STEP: Creating statefulset with conflicting port in namespace statefulset-275
STEP: Waiting until pod test-pod will start running in namespace statefulset-275
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-275
Jul 31 08:08:15.214: INFO: Observed stateful pod in namespace: statefulset-275, name: ss-0, uid: 13662efe-79ee-4088-9806-1fc0d32cf591, status phase: Pending. Waiting for statefulset controller to delete.
Jul 31 08:08:15.612: INFO: Observed stateful pod in namespace: statefulset-275, name: ss-0, uid: 13662efe-79ee-4088-9806-1fc0d32cf591, status phase: Failed. Waiting for statefulset controller to delete.
Jul 31 08:08:15.617: INFO: Observed stateful pod in namespace: statefulset-275, name: ss-0, uid: 13662efe-79ee-4088-9806-1fc0d32cf591, status phase: Failed. Waiting for statefulset controller to delete.
Jul 31 08:08:15.620: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-275
STEP: Removing pod with conflicting port in namespace statefulset-275
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-275 and reaches the Running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116
Jul 31 08:08:20.039: INFO: Deleting all statefulset in ns statefulset-275
... skipping 43 lines ...
• [SLOW TEST:7.335 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":7,"skipped":89,"failed":1,"failures":["[sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1"]}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:41.783: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":65,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:15.503: INFO: >>> kubeConfig: /root/.kube/config
... skipping 97 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:161
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":10,"skipped":65,"failed":1,"failures":["[sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:41.909: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 64 lines ...
• [SLOW TEST:32.726 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":15,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:42.565: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 165 lines ...
Jul 31 08:07:44.765: INFO: Deleting ReplicationController up-down-1 took: 103.384887ms
Jul 31 08:07:44.867: INFO: Terminating ReplicationController up-down-1 pods took: 101.309219ms
STEP: verifying service up-down-1 is not up
Jul 31 08:07:55.880: INFO: Creating new host exec pod
Jul 31 08:07:56.085: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Jul 31 08:07:58.187: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Jul 31 08:07:58.187: INFO: Running '/tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1777 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.160.20:80 && echo service-down-failed'
Jul 31 08:08:01.354: INFO: rc: 28
Jul 31 08:08:01.355: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://100.65.160.20:80 && echo service-down-failed" in pod services-1777/verify-service-down-host-exec-pod: error running /tmp/kubectl880441627/kubectl --server=https://api.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-1777 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://100.65.160.20:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 http://100.65.160.20:80
command terminated with exit code 28

error:
exit status 28
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-1777
STEP: verifying service up-down-2 is still up
Jul 31 08:08:01.461: INFO: Creating new host exec pod
Jul 31 08:08:01.664: INFO: The status of Pod verify-service-up-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
... skipping 65 lines ...
• [SLOW TEST:104.735 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1015
------------------------------
{"msg":"PASSED [sig-network] Services should be able to up and down services","total":-1,"completed":7,"skipped":48,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:44.014: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:08:44.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4318" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":85,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:45.166: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 43 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-df758cb9-f5a1-45d8-a7a7-532fec443868
STEP: Creating a pod to test consume configMaps
Jul 31 08:08:40.604: INFO: Waiting up to 5m0s for pod "pod-configmaps-8a4c5cba-3c24-4dcb-8721-e450172ee023" in namespace "configmap-5157" to be "Succeeded or Failed"
Jul 31 08:08:40.706: INFO: Pod "pod-configmaps-8a4c5cba-3c24-4dcb-8721-e450172ee023": Phase="Pending", Reason="", readiness=false. Elapsed: 102.025908ms
Jul 31 08:08:42.809: INFO: Pod "pod-configmaps-8a4c5cba-3c24-4dcb-8721-e450172ee023": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205095112s
Jul 31 08:08:44.912: INFO: Pod "pod-configmaps-8a4c5cba-3c24-4dcb-8721-e450172ee023": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308491386s
STEP: Saw pod success
Jul 31 08:08:44.912: INFO: Pod "pod-configmaps-8a4c5cba-3c24-4dcb-8721-e450172ee023" satisfied condition "Succeeded or Failed"
Jul 31 08:08:45.019: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-configmaps-8a4c5cba-3c24-4dcb-8721-e450172ee023 container agnhost-container: <nil>
STEP: delete the pod
Jul 31 08:08:45.252: INFO: Waiting for pod pod-configmaps-8a4c5cba-3c24-4dcb-8721-e450172ee023 to disappear
Jul 31 08:08:45.354: INFO: Pod pod-configmaps-8a4c5cba-3c24-4dcb-8721-e450172ee023 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.685 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":60,"failed":1,"failures":["[sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:45.608: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 37 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":11,"skipped":100,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 31 08:07:44.157: INFO: >>> kubeConfig: /root/.kube/config
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":12,"skipped":100,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:47.393: INFO: Driver "local" does not provide raw block - skipping
... skipping 79 lines ...
Jul 31 08:08:42.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 31 08:08:43.255: INFO: Waiting up to 5m0s for pod "pod-ed4c355b-8b26-43e0-94c5-91321ac5d27f" in namespace "emptydir-7214" to be "Succeeded or Failed"
Jul 31 08:08:43.360: INFO: Pod "pod-ed4c355b-8b26-43e0-94c5-91321ac5d27f": Phase="Pending", Reason="", readiness=false. Elapsed: 104.343059ms
Jul 31 08:08:45.464: INFO: Pod "pod-ed4c355b-8b26-43e0-94c5-91321ac5d27f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208739646s
Jul 31 08:08:47.568: INFO: Pod "pod-ed4c355b-8b26-43e0-94c5-91321ac5d27f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.312232871s
STEP: Saw pod success
Jul 31 08:08:47.568: INFO: Pod "pod-ed4c355b-8b26-43e0-94c5-91321ac5d27f" satisfied condition "Succeeded or Failed"
Jul 31 08:08:47.672: INFO: Trying to get logs from node ip-172-20-58-77.eu-west-2.compute.internal pod pod-ed4c355b-8b26-43e0-94c5-91321ac5d27f container test-container: <nil>
STEP: delete the pod
Jul 31 08:08:47.886: INFO: Waiting for pod pod-ed4c355b-8b26-43e0-94c5-91321ac5d27f to disappear
Jul 31 08:08:47.992: INFO: Pod pod-ed4c355b-8b26-43e0-94c5-91321ac5d27f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 25 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":101,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:48.223: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:08:49.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2557" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":17,"skipped":102,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 50 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":15,"skipped":106,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":11,"skipped":69,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:50.179: INFO: Only supported for providers [azure] (not aws)
... skipping 189 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 31 08:08:50.815: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3" in namespace "downward-api-1513" to be "Succeeded or Failed"
Jul 31 08:08:50.917: INFO: Pod "downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3": Phase="Pending", Reason="", readiness=false. Elapsed: 101.986284ms
Jul 31 08:08:53.022: INFO: Pod "downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206702695s
Jul 31 08:08:55.125: INFO: Pod "downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309308773s
Jul 31 08:08:57.227: INFO: Pod "downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.41152232s
STEP: Saw pod success
Jul 31 08:08:57.227: INFO: Pod "downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3" satisfied condition "Succeeded or Failed"
Jul 31 08:08:57.329: INFO: Trying to get logs from node ip-172-20-61-108.eu-west-2.compute.internal pod downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3 container client-container: <nil>
STEP: delete the pod
Jul 31 08:08:57.545: INFO: Waiting for pod downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3 to disappear
Jul 31 08:08:57.647: INFO: Pod downwardapi-volume-9c73b8b5-5667-40d0-b8b2-041d08bf54e3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.658 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":110,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:57.866: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 222 lines ...
Jul 31 08:08:04.613: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-gz4gj] to have phase Bound
Jul 31 08:08:04.715: INFO: PersistentVolumeClaim pvc-gz4gj found and phase=Bound (102.139856ms)
STEP: Deleting the previously created pod
Jul 31 08:08:23.228: INFO: Deleting pod "pvc-volume-tester-gr5r6" in namespace "csi-mock-volumes-8923"
Jul 31 08:08:23.330: INFO: Wait up to 5m0s for pod "pvc-volume-tester-gr5r6" to be fully deleted
STEP: Checking CSI driver logs
Jul 31 08:08:27.641: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/599dab30-7cf6-4840-8fb0-a4c1c6097e79/volumes/kubernetes.io~csi/pvc-d74e8d3b-91b5-4a72-9a94-f7f74d45cd6e/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-gr5r6
Jul 31 08:08:27.641: INFO: Deleting pod "pvc-volume-tester-gr5r6" in namespace "csi-mock-volumes-8923"
STEP: Deleting claim pvc-gz4gj
Jul 31 08:08:27.949: INFO: Waiting up to 2m0s for PersistentVolume pvc-d74e8d3b-91b5-4a72-9a94-f7f74d45cd6e to get deleted
Jul 31 08:08:28.051: INFO: PersistentVolume pvc-d74e8d3b-91b5-4a72-9a94-f7f74d45cd6e found and phase=Released (101.88836ms)
Jul 31 08:08:30.154: INFO: PersistentVolume pvc-d74e8d3b-91b5-4a72-9a94-f7f74d45cd6e found and phase=Released (2.205176897s)
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":12,"skipped":102,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 31 08:08:59.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2954" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":13,"skipped":109,"failed":1,"failures":["[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 31 08:08:59.653: INFO: Only supported for providers [gce gke] (not aws)
... skipping 45067 lines ...
n=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-dgw2f\"\nI0731 08:18:45.164116       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"pvc-0ee1b520-d4a2-4314-85af-a5438ac0afd7\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-6170^4\") on node \"ip-172-20-51-93.eu-west-2.compute.internal\" \nI0731 08:18:45.169312       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-qng7k\"\nI0731 08:18:45.172341       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-dcpsn\"\nI0731 08:18:45.172475       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-72kt8\"\nI0731 08:18:45.180064       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-82kt7\"\nI0731 08:18:45.191183       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-fn57z\"\nI0731 08:18:45.200224       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-9sjc9\"\nI0731 08:18:45.201595       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-9gvsd\"\nI0731 08:18:45.511957       1 pv_controller.go:930] claim \"provisioning-6562/pvc-7sr86\" bound to volume \"local-bc89k\"\nI0731 08:18:45.516694       1 pv_controller.go:1341] isVolumeReleased[pvc-2e4c7214-f0a2-4052-a350-42522102087a]: volume is released\nI0731 08:18:45.518264       1 pv_controller.go:1341] isVolumeReleased[pvc-c204d514-ce1d-4a3f-83ce-38382113f9d9]: volume is released\nI0731 08:18:45.520640       1 pv_controller.go:879] volume \"local-bc89k\" entered phase \"Bound\"\nI0731 08:18:45.520691       1 pv_controller.go:982] volume \"local-bc89k\" bound to claim \"provisioning-6562/pvc-7sr86\"\nI0731 08:18:45.528981       1 pv_controller.go:823] claim \"provisioning-6562/pvc-7sr86\" entered phase \"Bound\"\nI0731 08:18:45.529341       1 pv_controller.go:930] claim \"provisioning-5462/pvc-q4qg5\" bound to volume \"local-jdhnb\"\nI0731 08:18:45.530133       1 route_controller.go:294] set node ip-172-20-60-242.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:45.530156       1 route_controller.go:294] set node ip-172-20-51-93.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:45.530167       1 route_controller.go:294] set node 
ip-172-20-61-108.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:45.530181       1 route_controller.go:294] set node ip-172-20-54-176.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:45.530196       1 route_controller.go:294] set node ip-172-20-58-77.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:45.535811       1 pv_controller.go:879] volume \"local-jdhnb\" entered phase \"Bound\"\nI0731 08:18:45.535837       1 pv_controller.go:982] volume \"local-jdhnb\" bound to claim \"provisioning-5462/pvc-q4qg5\"\nI0731 08:18:45.542794       1 pv_controller.go:823] claim \"provisioning-5462/pvc-q4qg5\" entered phase \"Bound\"\nI0731 08:18:45.543099       1 pv_controller.go:930] claim \"volume-7501/pvc-7pf6x\" bound to volume \"aws-wbnsj\"\nI0731 08:18:45.548814       1 pv_controller.go:879] volume \"aws-wbnsj\" entered phase \"Bound\"\nI0731 08:18:45.548836       1 pv_controller.go:982] volume \"aws-wbnsj\" bound to claim \"volume-7501/pvc-7pf6x\"\nI0731 08:18:45.556989       1 pv_controller.go:823] claim \"volume-7501/pvc-7pf6x\" entered phase \"Bound\"\nI0731 08:18:45.691174       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-2e4c7214-f0a2-4052-a350-42522102087a\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-0996b1aa6b74d16ee\") on node \"ip-172-20-61-108.eu-west-2.compute.internal\" \nI0731 08:18:45.693225       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-2e4c7214-f0a2-4052-a350-42522102087a\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-0996b1aa6b74d16ee\") on node \"ip-172-20-61-108.eu-west-2.compute.internal\" \nI0731 08:18:45.726332       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://eu-west-2a/vol-01927fc5e082001da\nI0731 08:18:45.726378       1 pv_controller.go:1436] volume \"pvc-c204d514-ce1d-4a3f-83ce-38382113f9d9\" deleted\nI0731 08:18:45.732879       1 pv_controller_base.go:505] deletion of claim \"provisioning-4850/awsjxsjv\" was already processed\nI0731 08:18:45.737970       1 aws_util.go:62] Error deleting EBS Disk volume aws://eu-west-2a/vol-0996b1aa6b74d16ee: error deleting EBS volume \"vol-0996b1aa6b74d16ee\" since volume is currently attached to \"i-0eccb4b5dfe1d0b8e\"\nE0731 08:18:45.738028       1 goroutinemap.go:150] Operation for \"delete-pvc-2e4c7214-f0a2-4052-a350-42522102087a[d2b56a98-99be-44fc-8bc3-4cead0ae75cd]\" failed. No retries permitted until 2021-07-31 08:18:46.738008716 +0000 UTC m=+846.556092025 (durationBeforeRetry 1s). 
Error: \"error deleting EBS volume \\\"vol-0996b1aa6b74d16ee\\\" since volume is currently attached to \\\"i-0eccb4b5dfe1d0b8e\\\"\"\nI0731 08:18:45.738073       1 event.go:291] \"Event occurred\" object=\"pvc-2e4c7214-f0a2-4052-a350-42522102087a\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0996b1aa6b74d16ee\\\" since volume is currently attached to \\\"i-0eccb4b5dfe1d0b8e\\\"\"\nI0731 08:18:45.811270       1 namespace_controller.go:185] Namespace has been deleted services-4701\nI0731 08:18:46.103247       1 aws.go:2037] Releasing in-process attachment entry: bw -> volume vol-08c1a577477ed26b0\nI0731 08:18:46.103298       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0\") from node \"ip-172-20-61-108.eu-west-2.compute.internal\" \nI0731 08:18:46.103466       1 event.go:291] \"Event occurred\" object=\"fsgroupchangepolicy-2750/pod-f9b8184a-92a7-4f67-b5d8-2e224a10345f\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\\\" \"\nI0731 08:18:46.151578       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-9845-1185/csi-mockplugin-597bf68d57\" objectUID=c4e54129-10c4-4e70-a94a-4e6d39027bdb kind=\"ControllerRevision\" virtual=false\nI0731 08:18:46.151792       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-9845-1185/csi-mockplugin\nI0731 08:18:46.151840       1 garbagecollector.go:471] \"Processing object\" object=\"csi-mock-volumes-9845-1185/csi-mockplugin-0\" objectUID=f4a64aa2-096a-45a6-8f19-8cbcff26aa71 kind=\"Pod\" virtual=false\nI0731 08:18:46.154927       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-9845-1185/csi-mockplugin-597bf68d57\" objectUID=c4e54129-10c4-4e70-a94a-4e6d39027bdb kind=\"ControllerRevision\" propagationPolicy=Background\nI0731 08:18:46.156253       1 garbagecollector.go:580] \"Deleting object\" object=\"csi-mock-volumes-9845-1185/csi-mockplugin-0\" objectUID=f4a64aa2-096a-45a6-8f19-8cbcff26aa71 kind=\"Pod\" propagationPolicy=Background\nI0731 08:18:46.408255       1 namespace_controller.go:185] Namespace has been deleted provisioning-8507\nI0731 08:18:46.635208       1 namespace_controller.go:185] Namespace has been deleted dns-autoscaling-82\nI0731 08:18:46.642420       1 namespace_controller.go:185] Namespace has been deleted watch-442\nI0731 08:18:46.917138       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-3bcc9a60-8738-4250-a685-aa553165f889\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-0adab3873782cc009\") on node \"ip-172-20-61-108.eu-west-2.compute.internal\" \nI0731 08:18:46.917209       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-wbnsj\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-00dcc4c13455182fd\") from node \"ip-172-20-58-77.eu-west-2.compute.internal\" \nI0731 08:18:46.919074       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-3bcc9a60-8738-4250-a685-aa553165f889\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-0adab3873782cc009\") on node \"ip-172-20-61-108.eu-west-2.compute.internal\" \nI0731 08:18:46.987991       1 aws.go:2014] Assigned mount device cv -> volume vol-00dcc4c13455182fd\nI0731 
08:18:47.215290       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-9845\nI0731 08:18:47.274984       1 namespace_controller.go:185] Namespace has been deleted server-version-5602\nI0731 08:18:47.325619       1 aws.go:2427] AttachVolume volume=\"vol-00dcc4c13455182fd\" instance=\"i-01d69da9e39710e15\" request returned {\n  AttachTime: 2021-07-31 08:18:47.318 +0000 UTC,\n  Device: \"/dev/xvdcv\",\n  InstanceId: \"i-01d69da9e39710e15\",\n  State: \"attaching\",\n  VolumeId: \"vol-00dcc4c13455182fd\"\n}\nI0731 08:18:49.442352       1 aws.go:2037] Releasing in-process attachment entry: cv -> volume vol-00dcc4c13455182fd\nI0731 08:18:49.442403       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"aws-wbnsj\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-00dcc4c13455182fd\") from node \"ip-172-20-58-77.eu-west-2.compute.internal\" \nI0731 08:18:49.442661       1 event.go:291] \"Event occurred\" object=\"volume-7501/aws-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-wbnsj\\\" \"\nI0731 08:18:49.579232       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-839/pvc-cqxzt\"\nI0731 08:18:49.586281       1 pv_controller.go:640] volume \"local-pvrsbdb\" is released and reclaim policy \"Retain\" will be executed\nI0731 08:18:49.590758       1 pv_controller.go:879] volume \"local-pvrsbdb\" entered phase \"Released\"\nE0731 08:18:49.623011       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-839/default: secrets \"default-token-v98g6\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-839 because it is being terminated\nI0731 08:18:50.261153       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"volume-1199/awsgjrqw\"\nI0731 08:18:50.266457       1 pv_controller.go:640] volume \"pvc-3bcc9a60-8738-4250-a685-aa553165f889\" is released and reclaim policy \"Delete\" will be executed\nI0731 08:18:50.268928       1 pv_controller.go:879] volume \"pvc-3bcc9a60-8738-4250-a685-aa553165f889\" entered phase \"Released\"\nI0731 08:18:50.270970       1 pv_controller.go:1341] isVolumeReleased[pvc-3bcc9a60-8738-4250-a685-aa553165f889]: volume is released\nI0731 08:18:50.442572       1 aws_util.go:62] Error deleting EBS Disk volume aws://eu-west-2a/vol-0adab3873782cc009: error deleting EBS volume \"vol-0adab3873782cc009\" since volume is currently attached to \"i-0eccb4b5dfe1d0b8e\"\nE0731 08:18:50.442641       1 goroutinemap.go:150] Operation for \"delete-pvc-3bcc9a60-8738-4250-a685-aa553165f889[901eb77a-fc12-4d25-a76f-1f83458ff09f]\" failed. No retries permitted until 2021-07-31 08:18:50.942621647 +0000 UTC m=+850.760704954 (durationBeforeRetry 500ms). 
Error: \"error deleting EBS volume \\\"vol-0adab3873782cc009\\\" since volume is currently attached to \\\"i-0eccb4b5dfe1d0b8e\\\"\"\nI0731 08:18:50.442675       1 event.go:291] \"Event occurred\" object=\"pvc-3bcc9a60-8738-4250-a685-aa553165f889\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-0adab3873782cc009\\\" since volume is currently attached to \\\"i-0eccb4b5dfe1d0b8e\\\"\"\nI0731 08:18:51.155164       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-3a1d861b-2d8d-4d87-9da7-d786e0526c2e\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-04497c3e313fc8f97\") on node \"ip-172-20-58-77.eu-west-2.compute.internal\" \nI0731 08:18:51.158584       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-3a1d861b-2d8d-4d87-9da7-d786e0526c2e\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-04497c3e313fc8f97\") on node \"ip-172-20-58-77.eu-west-2.compute.internal\" \nI0731 08:18:51.227127       1 aws.go:2291] Waiting for volume \"vol-0996b1aa6b74d16ee\" state: actual=detaching, desired=detached\nI0731 08:18:51.849307       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"csi-mock-volumes-6170/pvc-84zz8\"\nI0731 08:18:51.855476       1 pv_controller.go:640] volume \"pvc-0ee1b520-d4a2-4314-85af-a5438ac0afd7\" is released and reclaim policy \"Delete\" will be executed\nI0731 08:18:51.858500       1 pv_controller.go:879] volume \"pvc-0ee1b520-d4a2-4314-85af-a5438ac0afd7\" entered phase \"Released\"\nI0731 08:18:51.859836       1 pv_controller.go:1341] isVolumeReleased[pvc-0ee1b520-d4a2-4314-85af-a5438ac0afd7]: volume is released\nI0731 08:18:51.881681       1 pv_controller_base.go:505] deletion of claim \"csi-mock-volumes-6170/pvc-84zz8\" was already processed\nI0731 08:18:52.391302       1 aws.go:2291] Waiting for volume \"vol-0adab3873782cc009\" state: actual=detaching, desired=detached\nE0731 08:18:52.876031       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-7980/default: secrets \"default-token-fzdhg\" is forbidden: unable to create new content in namespace nettest-7980 because it is being terminated\nI0731 08:18:53.288578       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-07-31 08:18:14 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdbb\",\n  InstanceId: \"i-0eccb4b5dfe1d0b8e\",\n  State: \"detaching\",\n  VolumeId: \"vol-0996b1aa6b74d16ee\"\n}\nI0731 08:18:53.288627       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"pvc-2e4c7214-f0a2-4052-a350-42522102087a\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-0996b1aa6b74d16ee\") on node \"ip-172-20-61-108.eu-west-2.compute.internal\" \nI0731 08:18:53.474911       1 pv_controller.go:879] volume \"nfs-j4phz\" entered phase \"Available\"\nI0731 08:18:53.566609       1 pv_controller.go:930] claim \"pv-7871/pvc-6cvlt\" bound to volume \"nfs-j4phz\"\nI0731 08:18:53.573163       1 pv_controller.go:879] volume \"nfs-j4phz\" entered phase \"Bound\"\nI0731 08:18:53.573191       1 pv_controller.go:982] volume \"nfs-j4phz\" bound to claim \"pv-7871/pvc-6cvlt\"\nI0731 08:18:53.577705       1 pv_controller.go:823] claim \"pv-7871/pvc-6cvlt\" entered phase \"Bound\"\nE0731 08:18:53.598181       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list 
*v1.PartialObjectMetadata: the server could not find the requested resource\nI0731 08:18:53.672663       1 pv_controller.go:879] volume \"nfs-6fjcb\" entered phase \"Available\"\nI0731 08:18:53.771991       1 pv_controller.go:930] claim \"pv-7871/pvc-q6kwm\" bound to volume \"nfs-6fjcb\"\nI0731 08:18:53.778070       1 pv_controller.go:879] volume \"nfs-6fjcb\" entered phase \"Bound\"\nI0731 08:18:53.778202       1 pv_controller.go:982] volume \"nfs-6fjcb\" bound to claim \"pv-7871/pvc-q6kwm\"\nI0731 08:18:53.782720       1 pv_controller.go:823] claim \"pv-7871/pvc-q6kwm\" entered phase \"Bound\"\nI0731 08:18:53.877548       1 pv_controller.go:879] volume \"nfs-v98sx\" entered phase \"Available\"\nI0731 08:18:53.976893       1 pv_controller.go:930] claim \"pv-7871/pvc-kdc7r\" bound to volume \"nfs-v98sx\"\nI0731 08:18:53.983197       1 pv_controller.go:879] volume \"nfs-v98sx\" entered phase \"Bound\"\nI0731 08:18:53.983274       1 pv_controller.go:982] volume \"nfs-v98sx\" bound to claim \"pv-7871/pvc-kdc7r\"\nI0731 08:18:53.987806       1 pv_controller.go:823] claim \"pv-7871/pvc-kdc7r\" entered phase \"Bound\"\nI0731 08:18:54.464486       1 aws.go:2291] Waiting for volume \"vol-0adab3873782cc009\" state: actual=detaching, desired=detached\nI0731 08:18:55.097333       1 pv_controller.go:879] volume \"local-pvh6vxk\" entered phase \"Available\"\nI0731 08:18:55.197541       1 pv_controller.go:930] claim \"persistent-local-volumes-test-1379/pvc-z579x\" bound to volume \"local-pvh6vxk\"\nI0731 08:18:55.204861       1 pv_controller.go:879] volume \"local-pvh6vxk\" entered phase \"Bound\"\nI0731 08:18:55.204893       1 pv_controller.go:982] volume \"local-pvh6vxk\" bound to claim \"persistent-local-volumes-test-1379/pvc-z579x\"\nI0731 08:18:55.214593       1 pv_controller.go:823] claim \"persistent-local-volumes-test-1379/pvc-z579x\" entered phase \"Bound\"\nI0731 08:18:55.515771       1 route_controller.go:294] set node ip-172-20-61-108.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:55.515771       1 route_controller.go:294] set node ip-172-20-54-176.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:55.515798       1 route_controller.go:294] set node ip-172-20-58-77.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:55.515807       1 route_controller.go:294] set node ip-172-20-60-242.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:55.515867       1 route_controller.go:294] set node ip-172-20-51-93.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:18:55.964252       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-2970/webserver-deployment-795d758f88\" need=3 creating=3\nI0731 08:18:55.964430       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-deployment-795d758f88 to 3\"\nI0731 08:18:55.971372       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-qhcks\"\nI0731 08:18:55.972318       1 replica_set.go:595] \"Too many 
replicas\" replicaSet=\"deployment-2970/webserver-deployment-847dcfb7fb\" need=8 deleting=2\nI0731 08:18:55.972348       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-2970/webserver-deployment-847dcfb7fb\" relatedReplicaSets=[webserver-deployment-847dcfb7fb webserver-deployment-795d758f88]\nI0731 08:18:55.972811       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-deployment-847dcfb7fb to 8\"\nI0731 08:18:55.974214       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-deployment-847dcfb7fb\" pod=\"deployment-2970/webserver-deployment-847dcfb7fb-fn57z\"\nI0731 08:18:55.974335       1 controller_utils.go:602] \"Deleting pod\" controller=\"webserver-deployment-847dcfb7fb\" pod=\"deployment-2970/webserver-deployment-847dcfb7fb-qng7k\"\nI0731 08:18:55.979111       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2970/webserver-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0731 08:18:55.981801       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-xvs5p\"\nI0731 08:18:55.984720       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-rxkv8\"\nI0731 08:18:55.991799       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-deployment-795d758f88 to 5\"\nI0731 08:18:55.998427       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-deployment-847dcfb7fb-qng7k\"\nI0731 08:18:56.008843       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-deployment-847dcfb7fb-fn57z\"\nI0731 08:18:56.012698       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-2970/webserver-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"webserver-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0731 08:18:56.022167       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-2970/webserver-deployment-795d758f88\" need=5 creating=2\nI0731 08:18:56.028388       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-7l7lm\"\nI0731 08:18:56.034977       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-dbr47\"\nI0731 08:18:56.727760       1 aws.go:2291] Waiting for volume \"vol-04497c3e313fc8f97\" state: actual=detaching, desired=detached\nE0731 08:18:56.798602       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0731 08:18:57.098623       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-2970/webserver-deployment-847dcfb7fb\" need=20 creating=12\nI0731 08:18:57.100219       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-deployment-847dcfb7fb to 20\"\nI0731 08:18:57.112084       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-2970/webserver-deployment-795d758f88\" need=13 creating=8\nI0731 08:18:57.112889       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-j5bfv\"\nI0731 08:18:57.120520       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-kns9m\"\nI0731 08:18:57.120881       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-c4x8n\"\nI0731 08:18:57.124549       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-deployment-795d758f88 to 13\"\nI0731 08:18:57.127312       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-wqhg8\"\nI0731 08:18:57.149365       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-qj7mf\"\nI0731 08:18:57.150781       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-r8l45\"\nI0731 08:18:57.152195       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-jvrp2\"\nI0731 08:18:57.154128       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-qzk9m\"\nI0731 
08:18:57.154421       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-cqhg8\"\nI0731 08:18:57.165157       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-bhgwk\"\nI0731 08:18:57.181192       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-clrjm\"\nI0731 08:18:57.182752       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-g49cc\"\nI0731 08:18:57.182937       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-4nhqn\"\nI0731 08:18:57.183096       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-wm869\"\nI0731 08:18:57.183443       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-frtmt\"\nI0731 08:18:57.194683       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-68zr4\"\nI0731 08:18:57.195432       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-whkx6\"\nI0731 08:18:57.195667       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-6p24w\"\nI0731 08:18:57.196153       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-847dcfb7fb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-847dcfb7fb-qw4fq\"\nI0731 08:18:57.216854       1 event.go:291] \"Event occurred\" object=\"deployment-2970/webserver-deployment-795d758f88\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-deployment-795d758f88-ksjs6\"\nE0731 08:18:57.293494       1 tokens_controller.go:262] error synchronizing serviceaccount csi-mock-volumes-6170/default: secrets \"default-token-l95td\" is forbidden: unable to create new content in namespace csi-mock-volumes-6170 because it is being terminated\nI0731 
08:18:58.530472       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-07-31 08:18:33 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdbs",
  InstanceId: "i-0eccb4b5dfe1d0b8e",
  State: "detaching",
  VolumeId: "vol-0adab3873782cc009"
}
I0731 08:18:58.530525       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-3bcc9a60-8738-4250-a685-aa553165f889" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-0adab3873782cc009") on node "ip-172-20-61-108.eu-west-2.compute.internal"
I0731 08:18:58.793778       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-07-31 08:18:00 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdbi",
  InstanceId: "i-01d69da9e39710e15",
  State: "detaching",
  VolumeId: "vol-04497c3e313fc8f97"
}
I0731 08:18:58.793827       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-3a1d861b-2d8d-4d87-9da7-d786e0526c2e" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-04497c3e313fc8f97") on node "ip-172-20-58-77.eu-west-2.compute.internal"
I0731 08:18:59.430678       1 event.go:291] "Event occurred" object="statefulset-1562/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-0 in StatefulSet ss2 successful"
I0731 08:18:59.748216       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-839
I0731 08:18:59.836381       1 replica_set.go:559] "Too few replicas" replicaSet="replication-controller-3930/pod-adoption" need=1 creating=1
E0731 08:18:59.851432       1 tokens_controller.go:262] error synchronizing serviceaccount replication-controller-3930/default: secrets "default-token-247vn" is forbidden: unable to create new content in namespace replication-controller-3930 because it is being terminated
I0731 08:18:59.884115       1 garbagecollector.go:471] "Processing object" object="replication-controller-3930/pod-adoption" objectUID=093a37ef-c202-420a-82c0-41366370b035 kind="Pod" virtual=false
E0731 08:19:00.250752       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0731 08:19:00.382463       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-attacher-9p9nd" objectUID=5123dd93-37f5-436a-9f84-c4a3b080a4d7 kind="EndpointSlice" virtual=false
I0731 08:19:00.396422       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-attacher-9p9nd" objectUID=5123dd93-37f5-436a-9f84-c4a3b080a4d7 kind="EndpointSlice" propagationPolicy=Background
E0731 08:19:00.413738       1 tokens_controller.go:262] error synchronizing serviceaccount gc-8156/default: secrets "default-token-jgdq2" is forbidden: unable to create new content in namespace gc-8156 because it is being terminated
I0731 08:19:00.499235       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-342-5124/csi-hostpath-attacher
I0731 08:19:00.499666       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-attacher-0" objectUID=90d49fa5-f8b3-4036-925e-64e45c8a967c kind="Pod" virtual=false
I0731 08:19:00.499926       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-attacher-5749dc7644" objectUID=cb9d78bc-f9d3-47fe-9a2c-a1c99a8b80ca kind="ControllerRevision" virtual=false
I0731 08:19:00.502025       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-attacher-0" objectUID=90d49fa5-f8b3-4036-925e-64e45c8a967c kind="Pod" propagationPolicy=Background
I0731 08:19:00.502378       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-attacher-5749dc7644" objectUID=cb9d78bc-f9d3-47fe-9a2c-a1c99a8b80ca kind="ControllerRevision" propagationPolicy=Background
I0731 08:19:00.514550       1 pv_controller.go:1341] isVolumeReleased[pvc-3bcc9a60-8738-4250-a685-aa553165f889]: volume is released
I0731 08:19:00.514944       1 pv_controller.go:1341] isVolumeReleased[pvc-2e4c7214-f0a2-4052-a350-42522102087a]: volume is released
I0731 08:19:00.665701       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://eu-west-2a/vol-0adab3873782cc009
I0731 08:19:00.665725       1 pv_controller.go:1436] volume "pvc-3bcc9a60-8738-4250-a685-aa553165f889" deleted
I0731 08:19:00.675082       1 pv_controller_base.go:505] deletion of claim "volume-1199/awsgjrqw" was already processed
I0731 08:19:00.708266       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpathplugin-nqsnf" objectUID=d7d21145-f8c8-4112-a11b-bdf927c1caf0 kind="EndpointSlice" virtual=false
I0731 08:19:00.712651       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpathplugin-nqsnf" objectUID=d7d21145-f8c8-4112-a11b-bdf927c1caf0 kind="EndpointSlice" propagationPolicy=Background
I0731 08:19:00.800553       1 aws_util.go:66] Successfully deleted EBS Disk volume aws://eu-west-2a/vol-0996b1aa6b74d16ee
I0731 08:19:00.800583       1 pv_controller.go:1436] volume "pvc-2e4c7214-f0a2-4052-a350-42522102087a" deleted
I0731 08:19:00.808081       1 pv_controller_base.go:505] deletion of claim "provisioning-2970/aws4fcm6" was already processed
I0731 08:19:00.817926       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpathplugin-997c774d8" objectUID=89df5a79-b2a3-4598-a57e-b0a647fa698c kind="ControllerRevision" virtual=false
I0731 08:19:00.818054       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-342-5124/csi-hostpathplugin
I0731 08:19:00.818135       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpathplugin-0" objectUID=5b7dc585-1b37-419a-a229-5dc5a26cadcb kind="Pod" virtual=false
I0731 08:19:00.819874       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpathplugin-997c774d8" objectUID=89df5a79-b2a3-4598-a57e-b0a647fa698c kind="ControllerRevision" propagationPolicy=Background
I0731 08:19:00.820001       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpathplugin-0" objectUID=5b7dc585-1b37-419a-a229-5dc5a26cadcb kind="Pod" propagationPolicy=Background
I0731 08:19:00.917839       1 namespace_controller.go:185] Namespace has been deleted provisioning-4850
I0731 08:19:00.921375       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-provisioner-qfmmg" objectUID=1bf6d792-53fa-429c-949c-ab8c852bd041 kind="EndpointSlice" virtual=false
I0731 08:19:00.923866       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-provisioner-qfmmg" objectUID=1bf6d792-53fa-429c-949c-ab8c852bd041 kind="EndpointSlice" propagationPolicy=Background
I0731 08:19:01.034276       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-provisioner-5d98c9456" objectUID=499c77b2-f1f9-4f91-b279-d5cd82d8fbcc kind="ControllerRevision" virtual=false
I0731 08:19:01.034700       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-342-5124/csi-hostpath-provisioner
I0731 08:19:01.034757       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-provisioner-0" objectUID=e5ba119d-b170-4274-9a35-ce033bfa611c kind="Pod" virtual=false
I0731 08:19:01.037398       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-provisioner-0" objectUID=e5ba119d-b170-4274-9a35-ce033bfa611c kind="Pod" propagationPolicy=Background
I0731 08:19:01.038025       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-provisioner-5d98c9456" objectUID=499c77b2-f1f9-4f91-b279-d5cd82d8fbcc kind="ControllerRevision" propagationPolicy=Background
I0731 08:19:01.137102       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-resizer-m5l7v" objectUID=40f4077d-c091-4873-8aba-378aadef2a2e kind="EndpointSlice" virtual=false
I0731 08:19:01.140887       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-resizer-m5l7v" objectUID=40f4077d-c091-4873-8aba-378aadef2a2e kind="EndpointSlice" propagationPolicy=Background
I0731 08:19:01.246778       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-6170-6168/csi-mockplugin-58b66c85fd" objectUID=89fd1917-3a44-426f-ad18-65d3114aa42c kind="ControllerRevision" virtual=false
I0731 08:19:01.247262       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-6170-6168/csi-mockplugin
I0731 08:19:01.247438       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-6170-6168/csi-mockplugin-0" objectUID=877e6db0-d3bc-4a00-a5cf-998e2f58a4ab kind="Pod" virtual=false
I0731 08:19:01.249066       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-6170-6168/csi-mockplugin-58b66c85fd" objectUID=89fd1917-3a44-426f-ad18-65d3114aa42c kind="ControllerRevision" propagationPolicy=Background
I0731 08:19:01.249816       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-6170-6168/csi-mockplugin-0" objectUID=877e6db0-d3bc-4a00-a5cf-998e2f58a4ab kind="Pod" propagationPolicy=Background
I0731 08:19:01.252251       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-342-5124/csi-hostpath-resizer
I0731 08:19:01.252267       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-resizer-5d69f858f" objectUID=41b2fe7c-6a64-4a18-94cd-80ab6ae0c675 kind="ControllerRevision" virtual=false
I0731 08:19:01.252290       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-resizer-0" objectUID=3f19bf18-ab6e-438d-bb2d-7e854ddd8fd9 kind="Pod" virtual=false
I0731 08:19:01.258211       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-resizer-5d69f858f" objectUID=41b2fe7c-6a64-4a18-94cd-80ab6ae0c675 kind="ControllerRevision" propagationPolicy=Background
I0731 08:19:01.260728       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-resizer-0" objectUID=3f19bf18-ab6e-438d-bb2d-7e854ddd8fd9 kind="Pod" propagationPolicy=Background
I0731 08:19:01.349540       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-6170-6168/csi-mockplugin-attacher-79458df7bb" objectUID=d7a4293d-dcb3-4ed8-a1b6-d6f4df67adb3 kind="ControllerRevision" virtual=false
I0731 08:19:01.349919       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-6170-6168/csi-mockplugin-attacher
I0731 08:19:01.349962       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-6170-6168/csi-mockplugin-attacher-0" objectUID=9ffda8af-b72f-4d88-8848-d9f253b54a17 kind="Pod" virtual=false
I0731 08:19:01.351730       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-6170-6168/csi-mockplugin-attacher-79458df7bb" objectUID=d7a4293d-dcb3-4ed8-a1b6-d6f4df67adb3 kind="ControllerRevision" propagationPolicy=Background
I0731 08:19:01.351836       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-6170-6168/csi-mockplugin-attacher-0" objectUID=9ffda8af-b72f-4d88-8848-d9f253b54a17 kind="Pod" propagationPolicy=Background
I0731 08:19:01.360255       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-snapshotter-fkcmp" objectUID=7a56f4e8-f320-4434-9948-816448505c06 kind="EndpointSlice" virtual=false
I0731 08:19:01.362803       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-snapshotter-fkcmp" objectUID=7a56f4e8-f320-4434-9948-816448505c06 kind="EndpointSlice" propagationPolicy=Background
I0731 08:19:01.451786       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-6170-6168/csi-mockplugin-resizer
I0731 08:19:01.451800       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-6170-6168/csi-mockplugin-resizer-5cfd97bbc" objectUID=4a06aa66-5431-4d67-b876-281ad675d366 kind="ControllerRevision" virtual=false
I0731 08:19:01.451830       1 garbagecollector.go:471] "Processing object" object="csi-mock-volumes-6170-6168/csi-mockplugin-resizer-0" objectUID=31a3adbb-e61f-4fbe-8985-1802374471a0 kind="Pod" virtual=false
I0731 08:19:01.454241       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-6170-6168/csi-mockplugin-resizer-5cfd97bbc" objectUID=4a06aa66-5431-4d67-b876-281ad675d366 kind="ControllerRevision" propagationPolicy=Background
I0731 08:19:01.454377       1 garbagecollector.go:580] "Deleting object" object="csi-mock-volumes-6170-6168/csi-mockplugin-resizer-0" objectUID=31a3adbb-e61f-4fbe-8985-1802374471a0 kind="Pod" propagationPolicy=Background
I0731 08:19:01.471951       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-snapshotter-0" objectUID=771d5fe9-ccf5-4f89-b3a6-dc3ae84a34cf kind="Pod" virtual=false
I0731 08:19:01.472293       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-342-5124/csi-hostpath-snapshotter
I0731 08:19:01.472356       1 garbagecollector.go:471] "Processing object" object="ephemeral-342-5124/csi-hostpath-snapshotter-6fd6c94bf9" objectUID=1e71b227-92f9-4c8d-9085-5efc4926fa69 kind="ControllerRevision" virtual=false
I0731 08:19:01.474240       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-snapshotter-0" objectUID=771d5fe9-ccf5-4f89-b3a6-dc3ae84a34cf kind="Pod" propagationPolicy=Background
I0731 08:19:01.474375       1 garbagecollector.go:580] "Deleting object" object="ephemeral-342-5124/csi-hostpath-snapshotter-6fd6c94bf9" objectUID=1e71b227-92f9-4c8d-9085-5efc4926fa69 kind="ControllerRevision" propagationPolicy=Background
I0731 08:19:01.595916       1 namespace_controller.go:185] Namespace has been deleted svcaccounts-44
I0731 08:19:01.612181       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-9845-1185
I0731 08:19:02.420449       1 endpoints_controller.go:368] "Error syncing endpoints, retrying" service="dns-335/dns-test-service" err="Operation cannot be fulfilled on endpoints \"dns-test-service\": the object has been modified; please apply your changes to the latest version and try again"
I0731 08:19:02.420918       1 event.go:291] "Event occurred" object="dns-335/dns-test-service" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint dns-335/dns-test-service: Operation cannot be fulfilled on endpoints \"dns-test-service\": the object has been modified; please apply your changes to the latest version and try again"
I0731 08:19:02.430347       1 namespace_controller.go:185] Namespace has been deleted ephemeral-342
I0731 08:19:02.449942       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6170
I0731 08:19:02.518515       1 garbagecollector.go:471] "Processing object" object="dns-335/test-service-2-4vc96" objectUID=ffafac88-8038-4a56-aca1-3ca383d4d5e1 kind="EndpointSlice" virtual=false
I0731 08:19:02.521517       1 garbagecollector.go:580] "Deleting object" object="dns-335/test-service-2-4vc96" objectUID=ffafac88-8038-4a56-aca1-3ca383d4d5e1 kind="EndpointSlice" propagationPolicy=Background
I0731 08:19:02.631391       1 garbagecollector.go:471] "Processing object" object="dns-335/dns-test-service-xtf7t" objectUID=7105a063-f98b-4227-834d-ef8c4ff09ec3 kind="EndpointSlice" virtual=false
I0731 08:19:02.634818       1 garbagecollector.go:580] "Deleting object" object="dns-335/dns-test-service-xtf7t" objectUID=7105a063-f98b-4227-834d-ef8c4ff09ec3 kind="EndpointSlice" propagationPolicy=Background
E0731 08:19:03.450966       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0731 08:19:03.578707       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-qhcks" objectUID=404d728a-d2dc-4845-9162-67fe62bcfc34 kind="Pod" virtual=false
I0731 08:19:03.578846       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-rxkv8" objectUID=69ba46de-4122-45a9-9702-dc4839a4dcf3 kind="Pod" virtual=false
I0731 08:19:03.579205       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-wqhg8" objectUID=6a348265-8c2e-4daa-a1de-a5a905887ba4 kind="Pod" virtual=false
I0731 08:19:03.579405       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-g49cc" objectUID=f2b7f653-11f1-4c8f-8b42-f1a6f8d48f53 kind="Pod" virtual=false
I0731 08:19:03.579628       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-4nhqn" objectUID=3027e06c-730c-4e15-ab37-55ea898b29e3 kind="Pod" virtual=false
I0731 08:19:03.579742       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-qj7mf" objectUID=55e4a8f7-9482-4c85-9491-cc10c969803b kind="Pod" virtual=false
I0731 08:19:03.579937       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-r8l45" objectUID=e867d347-6101-4ba8-924c-592ff69a9b31 kind="Pod" virtual=false
I0731 08:19:03.579650       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-wm869" objectUID=e33882b8-98a9-4316-8a1b-85b41caa50e1 kind="Pod" virtual=false
I0731 08:19:03.579664       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-ksjs6" objectUID=972484c7-aadc-4122-86a9-3b4b5db98037 kind="Pod" virtual=false
I0731 08:19:03.580386       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-frtmt" objectUID=af7bedd1-b718-47a1-8ab3-f56b80f7f6cb kind="Pod" virtual=false
I0731 08:19:03.579678       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-xvs5p" objectUID=b99bee21-cf1b-4131-8fc6-cb737499ffe9 kind="Pod" virtual=false
I0731 08:19:03.579708       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-7l7lm" objectUID=9309afe1-64fb-43d7-97c4-99d2fd277f51 kind="Pod" virtual=false
I0731 08:19:03.579730       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-795d758f88-dbr47" objectUID=0ea9117a-9a4a-48fb-93e8-a7484c5c9620 kind="Pod" virtual=false
I0731 08:19:03.585510       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-wqhg8" objectUID=6a348265-8c2e-4daa-a1de-a5a905887ba4 kind="Pod" propagationPolicy=Background
I0731 08:19:03.586584       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-qhcks" objectUID=404d728a-d2dc-4845-9162-67fe62bcfc34 kind="Pod" propagationPolicy=Background
I0731 08:19:03.589187       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-dbr47" objectUID=0ea9117a-9a4a-48fb-93e8-a7484c5c9620 kind="Pod" propagationPolicy=Background
I0731 08:19:03.589459       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-7l7lm" objectUID=9309afe1-64fb-43d7-97c4-99d2fd277f51 kind="Pod" propagationPolicy=Background
I0731 08:19:03.589701       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-frtmt" objectUID=af7bedd1-b718-47a1-8ab3-f56b80f7f6cb kind="Pod" propagationPolicy=Background
I0731 08:19:03.589939       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-xvs5p" objectUID=b99bee21-cf1b-4131-8fc6-cb737499ffe9 kind="Pod" propagationPolicy=Background
I0731 08:19:03.590185       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-ksjs6" objectUID=972484c7-aadc-4122-86a9-3b4b5db98037 kind="Pod" propagationPolicy=Background
I0731 08:19:03.594834       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-wm869" objectUID=e33882b8-98a9-4316-8a1b-85b41caa50e1 kind="Pod" propagationPolicy=Background
I0731 08:19:03.595150       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-r8l45" objectUID=e867d347-6101-4ba8-924c-592ff69a9b31 kind="Pod" propagationPolicy=Background
I0731 08:19:03.596438       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-rxkv8" objectUID=69ba46de-4122-45a9-9702-dc4839a4dcf3 kind="Pod" propagationPolicy=Background
I0731 08:19:03.596486       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-qj7mf" objectUID=55e4a8f7-9482-4c85-9491-cc10c969803b kind="Pod" propagationPolicy=Background
I0731 08:19:03.596525       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-g49cc" objectUID=f2b7f653-11f1-4c8f-8b42-f1a6f8d48f53 kind="Pod" propagationPolicy=Background
I0731 08:19:03.596559       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-795d758f88-4nhqn" objectUID=3027e06c-730c-4e15-ab37-55ea898b29e3 kind="Pod" propagationPolicy=Background
I0731 08:19:03.596932       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-j5bfv" objectUID=b46f4b0f-34bd-42c2-a6c8-b7850fc415b1 kind="Pod" virtual=false
I0731 08:19:03.597527       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-c4x8n" objectUID=bc7e3794-4e6e-49d9-a0c6-c018df4f6eeb kind="Pod" virtual=false
I0731 08:19:03.597541       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-qw4fq" objectUID=9938a874-4232-425f-aff4-4d928f8dafab kind="Pod" virtual=false
I0731 08:19:03.597556       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-68zr4" objectUID=028a667a-13e6-4505-923b-f1128e3673c9 kind="Pod" virtual=false
I0731 08:19:03.597568       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-2jvdx" objectUID=d5258e3e-e245-4aff-b101-972bf9bf25dd kind="Pod" virtual=false
I0731 08:19:03.597581       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-72kt8" objectUID=cf707ed9-9e67-412b-a24f-174b15f4503b kind="Pod" virtual=false
I0731 08:19:03.597592       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-82kt7" objectUID=4c94a5b9-a55f-4dc5-8987-7f9585069e7b kind="Pod" virtual=false
I0731 08:19:03.612829       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-clrjm" objectUID=e7f9130e-a8ad-49c2-8b4d-e16b3ef6cc53 kind="Pod" virtual=false
I0731 08:19:03.612984       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-kns9m" objectUID=c13b9ded-83d0-4027-ba6a-df24d063289e kind="Pod" virtual=false
I0731 08:19:03.618572       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-dcpsn" objectUID=221edf06-f0fc-4981-9733-1a892fb7671d kind="Pod" virtual=false
I0731 08:19:03.621304       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-847dcfb7fb-c4x8n" objectUID=bc7e3794-4e6e-49d9-a0c6-c018df4f6eeb kind="Pod" propagationPolicy=Background
I0731 08:19:03.621523       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-847dcfb7fb-qw4fq" objectUID=9938a874-4232-425f-aff4-4d928f8dafab kind="Pod" propagationPolicy=Background
I0731 08:19:03.621585       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-847dcfb7fb-j5bfv" objectUID=b46f4b0f-34bd-42c2-a6c8-b7850fc415b1 kind="Pod" propagationPolicy=Background
I0731 08:19:03.621715       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-847dcfb7fb-68zr4" objectUID=028a667a-13e6-4505-923b-f1128e3673c9 kind="Pod" propagationPolicy=Background
I0731 08:19:03.624201       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-jvrp2" objectUID=9b4a3566-6252-4366-9068-72093606f148 kind="Pod" virtual=false
I0731 08:19:03.624217       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-whkx6" objectUID=438db2c8-3087-4690-93a7-ead5027f1ef5 kind="Pod" virtual=false
I0731 08:19:03.624247       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-nbwlb" objectUID=7f2aad27-0dae-45f6-94b9-66ed1fa8af50 kind="Pod" virtual=false
I0731 08:19:03.624310       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-9sjc9" objectUID=8e730702-37e1-4c0f-af54-0604eb84cdc9 kind="Pod" virtual=false
I0731 08:19:03.624342       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-cqhg8" objectUID=30b9677c-f951-4123-ba1b-648c8e5e82f9 kind="Pod" virtual=false
I0731 08:19:03.634309       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-847dcfb7fb-2jvdx" objectUID=d5258e3e-e245-4aff-b101-972bf9bf25dd kind="Pod" propagationPolicy=Background
I0731 08:19:03.634519       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-bhgwk" objectUID=fc035413-5422-49e6-b71b-3142a3f1d64d kind="Pod" virtual=false
I0731 08:19:03.638226       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-6p24w" objectUID=de314ea6-e2dd-4c14-99e6-89029310076b kind="Pod" virtual=false
I0731 08:19:03.640812       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-dgw2f" objectUID=e0b1106a-2ce7-4d93-a462-c1497c7e1750 kind="Pod" virtual=false
I0731 08:19:03.642933       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-9gvsd" objectUID=8a36c069-0452-417f-923b-d0eda4894e85 kind="Pod" virtual=false
I0731 08:19:03.643075       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-qzk9m" objectUID=3a281348-ece3-4b60-b4c9-5deafbd8ed76 kind="Pod" virtual=false
I0731 08:19:03.666426       1 deployment_controller.go:583] "Deployment has been deleted" deployment="deployment-2970/webserver-deployment"
I0731 08:19:03.680443       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-847dcfb7fb-72kt8" objectUID=cf707ed9-9e67-412b-a24f-174b15f4503b kind="Pod" propagationPolicy=Background
E0731 08:19:03.696434       1 tokens_controller.go:262] error synchronizing serviceaccount deployment-2970/default: secrets "default-token-vk2v5" is forbidden: unable to create new content in namespace deployment-2970 because it is being terminated
I0731 08:19:03.786016       1 garbagecollector.go:580] "Deleting object" object="deployment-2970/webserver-deployment-847dcfb7fb-kns9m" objectUID=c13b9ded-83d0-4027-ba6a-df24d063289e kind="Pod" propagationPolicy=Background
E0731 08:19:03.930550       1 tokens_controller.go:262] error synchronizing serviceaccount multi-az-7685/default: secrets "default-token-rq5mw" is forbidden: unable to create new content in namespace multi-az-7685 because it is being terminated
E0731 08:19:03.932344       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"webserver-deployment-847dcfb7fb-c4x8n", UID:"bc7e3794-4e6e-49d9-a0c6-c018df4f6eeb", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"deployment-2970"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"webserver-deployment-847dcfb7fb", UID:"4926667f-c16c-477e-9be1-30b3de20b6c4", Controller:(*bool)(0xc002dc3b80), BlockOwnerDeletion:(*bool)(0xc002dc3b81)}}}: pods "webserver-deployment-847dcfb7fb-c4x8n" not found
I0731 08:19:03.938088       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-c4x8n" objectUID=bc7e3794-4e6e-49d9-a0c6-c018df4f6eeb kind="Pod" virtual=false
E0731 08:19:03.981080       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"webserver-deployment-847dcfb7fb-j5bfv", UID:"b46f4b0f-34bd-42c2-a6c8-b7850fc415b1", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"deployment-2970"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"webserver-deployment-847dcfb7fb", UID:"4926667f-c16c-477e-9be1-30b3de20b6c4", Controller:(*bool)(0xc002d6d607), BlockOwnerDeletion:(*bool)(0xc002d6d608)}}}: pods "webserver-deployment-847dcfb7fb-j5bfv" not found
I0731 08:19:03.987484       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-j5bfv" objectUID=b46f4b0f-34bd-42c2-a6c8-b7850fc415b1 kind="Pod" virtual=false
E0731 08:19:04.030470       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"webserver-deployment-847dcfb7fb-qw4fq", UID:"9938a874-4232-425f-aff4-4d928f8dafab", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"deployment-2970"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"webserver-deployment-847dcfb7fb", UID:"4926667f-c16c-477e-9be1-30b3de20b6c4", Controller:(*bool)(0xc003280ea0), BlockOwnerDeletion:(*bool)(0xc003280ea1)}}}: pods "webserver-deployment-847dcfb7fb-qw4fq" not found
I0731 08:19:04.035739       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-qw4fq" objectUID=9938a874-4232-425f-aff4-4d928f8dafab kind="Pod" virtual=false
E0731 08:19:04.080744       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"webserver-deployment-847dcfb7fb-68zr4", UID:"028a667a-13e6-4505-923b-f1128e3673c9", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"deployment-2970"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"webserver-deployment-847dcfb7fb", UID:"4926667f-c16c-477e-9be1-30b3de20b6c4", Controller:(*bool)(0xc003281160), BlockOwnerDeletion:(*bool)(0xc003281161)}}}: pods "webserver-deployment-847dcfb7fb-68zr4" not found
I0731 08:19:04.086017       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-68zr4" objectUID=028a667a-13e6-4505-923b-f1128e3673c9 kind="Pod" virtual=false
E0731 08:19:04.381449       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"webserver-deployment-847dcfb7fb-2jvdx", UID:"d5258e3e-e245-4aff-b101-972bf9bf25dd", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"deployment-2970"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"webserver-deployment-847dcfb7fb", UID:"4926667f-c16c-477e-9be1-30b3de20b6c4", Controller:(*bool)(0xc004517d17), BlockOwnerDeletion:(*bool)(0xc004517d18)}}}: pods "webserver-deployment-847dcfb7fb-2jvdx" not found
I0731 08:19:04.386663       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-2jvdx" objectUID=d5258e3e-e245-4aff-b101-972bf9bf25dd kind="Pod" virtual=false
E0731 08:19:04.680319       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"webserver-deployment-847dcfb7fb-72kt8", UID:"cf707ed9-9e67-412b-a24f-174b15f4503b", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"deployment-2970"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"webserver-deployment-847dcfb7fb", UID:"4926667f-c16c-477e-9be1-30b3de20b6c4", Controller:(*bool)(0xc000c02db0), BlockOwnerDeletion:(*bool)(0xc000c02db1)}}}: pods "webserver-deployment-847dcfb7fb-72kt8" not found
I0731 08:19:04.685717       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-72kt8" objectUID=cf707ed9-9e67-412b-a24f-174b15f4503b kind="Pod" virtual=false
E0731 08:19:04.731205       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"webserver-deployment-847dcfb7fb-kns9m", UID:"c13b9ded-83d0-4027-ba6a-df24d063289e", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"deployment-2970"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"webserver-deployment-847dcfb7fb", UID:"4926667f-c16c-477e-9be1-30b3de20b6c4", Controller:(*bool)(0xc002dc3e60), BlockOwnerDeletion:(*bool)(0xc002dc3e61)}}}: pods "webserver-deployment-847dcfb7fb-kns9m" not found
I0731 08:19:04.736418       1 garbagecollector.go:471] "Processing object" object="deployment-2970/webserver-deployment-847dcfb7fb-kns9m" objectUID=c13b9ded-83d0-4027-ba6a-df24d063289e kind="Pod" virtual=false
I0731 08:19:04.812224       1 event.go:291] "Event occurred" object="statefulset-3566/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-2 in StatefulSet ss successful"
I0731 08:19:04.864849       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-3a1d861b-2d8d-4d87-9da7-d786e0526c2e" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-04497c3e313fc8f97") from node "ip-172-20-61-108.eu-west-2.compute.internal"
I0731 08:19:04.929485       1 aws.go:2014] Assigned mount device bx -> volume vol-04497c3e313fc8f97
I0731 08:19:05.352557       1 aws.go:2427] AttachVolume volume="vol-04497c3e313fc8f97" instance="i-0eccb4b5dfe1d0b8e" request returned {
  AttachTime: 2021-07-31 08:19:05.342 +0000 UTC,
  Device: "/dev/xvdbx",
  InstanceId: "i-0eccb4b5dfe1d0b8e",
  State: "attaching",
  VolumeId: "vol-04497c3e313fc8f97"
}
I0731 08:19:05.480485       1 namespace_controller.go:185] Namespace has been deleted gc-8156
I0731 08:19:05.506708       1 route_controller.go:294] set node ip-172-20-61-108.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:05.506708       1 route_controller.go:294] set node ip-172-20-54-176.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:05.506720       1 route_controller.go:294] set node ip-172-20-51-93.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:05.506735       1 route_controller.go:294] set node ip-172-20-58-77.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:05.506767       1 route_controller.go:294] set node ip-172-20-60-242.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
E0731 08:19:05.587029       1 tokens_controller.go:262] error synchronizing serviceaccount configmap-1982/default: secrets "default-token-9c764" is forbidden: unable to create new content in namespace configmap-1982 because it is being terminated
I0731 08:19:05.879101       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-7266dba5-1a16-4752-8443-eb2cc822e00c" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0") on node "ip-172-20-61-108.eu-west-2.compute.internal"
I0731 08:19:05.881619       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-7266dba5-1a16-4752-8443-eb2cc822e00c" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0") on node "ip-172-20-61-108.eu-west-2.compute.internal"
E0731 08:19:06.744432       1 tokens_controller.go:262] error synchronizing serviceaccount ephemeral-342-5124/default: secrets "default-token-n8s5s" is forbidden: unable to create new content in namespace ephemeral-342-5124 because it is being terminated
E0731 08:19:07.176824       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0731 08:19:07.457617       1 aws.go:2037] Releasing in-process attachment entry: bx -> volume vol-04497c3e313fc8f97
I0731 08:19:07.457670       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-3a1d861b-2d8d-4d87-9da7-d786e0526c2e" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-04497c3e313fc8f97") from node "ip-172-20-61-108.eu-west-2.compute.internal"
I0731 08:19:07.457912       1 event.go:291] "Event occurred" object="statefulset-3566/ss-2" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3a1d861b-2d8d-4d87-9da7-d786e0526c2e\" "
I0731 08:19:08.846278       1 namespace_controller.go:185] Namespace has been deleted deployment-2970
I0731 08:19:08.986906       1 namespace_controller.go:185] Namespace has been deleted multi-az-7685
I0731 08:19:09.376454       1 pv_controller.go:879] volume "local-pv27t2s" entered phase "Available"
I0731 08:19:09.476842       1 pv_controller.go:930] claim "persistent-local-volumes-test-6323/pvc-rrlwx" bound to volume "local-pv27t2s"
I0731 08:19:09.483884       1 pv_controller.go:879] volume "local-pv27t2s" entered phase "Bound"
I0731 08:19:09.484050       1 pv_controller.go:982] volume "local-pv27t2s" bound to claim "persistent-local-volumes-test-6323/pvc-rrlwx"
I0731 08:19:09.488712       1 pv_controller.go:823] claim "persistent-local-volumes-test-6323/pvc-rrlwx" entered phase "Bound"
E0731 08:19:09.883535       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-2970/default: secrets "default-token-q4r58" is forbidden: unable to create new content in namespace provisioning-2970 because it is being terminated
I0731 08:19:10.172419       1 pvc_protection_controller.go:291] "PVC is unused" PVC="volume-7161/pvc-g66j8"
I0731 08:19:10.176663       1 pv_controller.go:640] volume "local-qdzwc" is released and reclaim policy "Retain" will be executed
I0731 08:19:10.178949       1 pv_controller.go:879] volume "local-qdzwc" entered phase "Released"
I0731 08:19:10.276533       1 pv_controller_base.go:505] deletion of claim "volume-7161/pvc-g66j8" was already processed
I0731 08:19:10.613885       1 event.go:291] "Event occurred" object="statefulset-1562/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-1 in StatefulSet ss2 successful"
I0731 08:19:10.632099       1 namespace_controller.go:185] Namespace has been deleted configmap-1982
E0731 08:19:10.686205       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-4484/pvc-8fmkk: storageclass.storage.k8s.io "provisioning-4484" not found
I0731 08:19:10.686376       1 event.go:291] "Event occurred" object="provisioning-4484/pvc-8fmkk" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-4484\" not found"
I0731 08:19:10.790987       1 pv_controller.go:879] volume "local-qpgdj" entered phase "Available"
I0731 08:19:11.343507       1 aws.go:2291] Waiting for volume "vol-08c1a577477ed26b0" state: actual=detaching, desired=detached
E0731 08:19:11.590066       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0731 08:19:11.817202       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-7490/pvc-nxvb5: storageclass.storage.k8s.io "provisioning-7490" not found
I0731 08:19:11.817947       1 event.go:291] "Event occurred" object="provisioning-7490/pvc-nxvb5" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-7490\" not found"
I0731 08:19:11.924082       1 pv_controller.go:879] volume "local-wxwrl" entered phase "Available"
I0731 08:19:12.084016       1 stateful_set_control.go:523] StatefulSet statefulset-3566/ss terminating Pod ss-1 for update
I0731 08:19:12.092167       1 event.go:291] "Event occurred" object="statefulset-3566/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="delete Pod ss-1 in StatefulSet ss successful"
I0731 08:19:12.116555       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-6562/pvc-7sr86"
I0731 08:19:12.124245       1 pv_controller.go:640] volume "local-bc89k" is released and reclaim policy "Retain" will be executed
I0731 08:19:12.127139       1 pv_controller.go:879] volume "local-bc89k" entered phase "Released"
I0731 08:19:12.222222       1 pv_controller_base.go:505] deletion of claim "provisioning-6562/pvc-7sr86" was already processed
E0731 08:19:12.817881       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-9128/inline-volume-qd7s2-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0731 08:19:12.818386       1 event.go:291] "Event occurred" object="ephemeral-9128/inline-volume-qd7s2-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0731 08:19:12.960069       1 namespace_controller.go:185] Namespace has been deleted dns-335
I0731 08:19:13.120662       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-9128, name: inline-volume-qd7s2, uid: c067ac13-6162-4663-b4d9-f768b5605e75] to the attemptToDelete, because it's waiting for its dependents to be deleted
I0731 08:19:13.120878       1 garbagecollector.go:471] "Processing object" object="ephemeral-9128/inline-volume-qd7s2-my-volume" objectUID=ef5d6a5f-9660-47bd-a2dd-91486644d907 kind="PersistentVolumeClaim" virtual=false
I0731 08:19:13.121218       1 garbagecollector.go:471] "Processing object" object="ephemeral-9128/inline-volume-qd7s2" objectUID=c067ac13-6162-4663-b4d9-f768b5605e75 kind="Pod" virtual=false
I0731 08:19:13.136633       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-9128, name: inline-volume-qd7s2-my-volume, uid: ef5d6a5f-9660-47bd-a2dd-91486644d907] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-9128, name: inline-volume-qd7s2, uid: c067ac13-6162-4663-b4d9-f768b5605e75] is deletingDependents
I0731 08:19:13.137770       1 garbagecollector.go:580] "Deleting object" object="ephemeral-9128/inline-volume-qd7s2-my-volume" objectUID=ef5d6a5f-9660-47bd-a2dd-91486644d907 kind="PersistentVolumeClaim" propagationPolicy=Background
E0731 08:19:13.139746       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-9128/inline-volume-qd7s2-my-volume: storageclass.storage.k8s.io "no-such-storage-class" not found
I0731 08:19:13.140343       1 event.go:291] "Event occurred" object="ephemeral-9128/inline-volume-qd7s2-my-volume" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"no-such-storage-class\" not found"
I0731 08:19:13.140451       1 garbagecollector.go:471] "Processing object" object="ephemeral-9128/inline-volume-qd7s2-my-volume" objectUID=ef5d6a5f-9660-47bd-a2dd-91486644d907 kind="PersistentVolumeClaim" virtual=false
I0731 08:19:13.143080       1 pvc_protection_controller.go:291] "PVC is unused" PVC="ephemeral-9128/inline-volume-qd7s2-my-volume"
I0731 08:19:13.145827       1 garbagecollector.go:471] "Processing object" object="ephemeral-9128/inline-volume-qd7s2" objectUID=c067ac13-6162-4663-b4d9-f768b5605e75 kind="Pod" virtual=false
I0731 08:19:13.147194       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-9128, name: inline-volume-qd7s2, uid: c067ac13-6162-4663-b4d9-f768b5605e75]
I0731 08:19:13.407425       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {
  AttachTime: 2021-07-31 08:18:43 +0000 UTC,
  DeleteOnTermination: false,
  Device: "/dev/xvdbw",
  InstanceId: "i-0eccb4b5dfe1d0b8e",
  State: "detaching",
  VolumeId: "vol-08c1a577477ed26b0"
}
I0731 08:19:13.407490       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-7266dba5-1a16-4752-8443-eb2cc822e00c" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0") on node "ip-172-20-61-108.eu-west-2.compute.internal"
I0731 08:19:13.435425       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-7266dba5-1a16-4752-8443-eb2cc822e00c" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0") from node "ip-172-20-54-176.eu-west-2.compute.internal"
I0731 08:19:13.474843       1 aws.go:2014] Assigned mount device cs -> volume vol-08c1a577477ed26b0
I0731 08:19:13.525419       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-5462/pvc-q4qg5"
I0731 08:19:13.530821       1 pv_controller.go:640] volume "local-jdhnb" is released and reclaim policy "Retain" will be executed
I0731 08:19:13.533645       1 pv_controller.go:879] volume "local-jdhnb" entered phase "Released"
I0731 08:19:13.629348       1 pv_controller_base.go:505] deletion of claim "provisioning-5462/pvc-q4qg5" was already processed
I0731 08:19:13.687843       1 event.go:291] "Event occurred" object="statefulset-1562/ss2" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss2-2 in StatefulSet ss2 successful"
W0731 08:19:13.691476       1 endpointslice_controller.go:305] Error syncing endpoint slices for service "statefulset-1562/test", retrying. Error: EndpointSlice informer cache is out of date
I0731 08:19:13.835261       1 aws.go:2427] AttachVolume volume="vol-08c1a577477ed26b0" instance="i-02a39ac8c52407743" request returned {
  AttachTime: 2021-07-31 08:19:13.828 +0000 UTC,
  Device: "/dev/xvdcs",
  InstanceId: "i-02a39ac8c52407743",
  State: "attaching",
  VolumeId: "vol-08c1a577477ed26b0"
}
I0731 08:19:14.348095       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-3353-3601
I0731 08:19:14.924661       1 namespace_controller.go:185] Namespace has been deleted provisioning-2970
I0731 08:19:15.000934       1 deployment_controller.go:583] "Deployment has been deleted" deployment="webhook-8835/sample-webhook-deployment"
I0731 08:19:15.218124       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-3302/test-rolling-update-with-lb-5ff6986c95" need=1 creating=1
I0731 08:19:15.218462       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 1"
I0731 08:19:15.224903       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-3302/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
I0731 08:19:15.227960       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-lglm6"
I0731 08:19:15.446871       1 route_controller.go:294] set node ip-172-20-58-77.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:15.446895       1 route_controller.go:294] set node ip-172-20-60-242.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:15.446904       1 route_controller.go:294] set node ip-172-20-61-108.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:15.446915       1 route_controller.go:294] set node ip-172-20-51-93.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:15.446924       1 route_controller.go:294] set node ip-172-20-54-176.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set
I0731 08:19:15.513851       1 pv_controller.go:930] claim "provisioning-4484/pvc-8fmkk" bound to volume "local-qpgdj"
I0731 08:19:15.521851       1 pv_controller.go:879] volume "local-qpgdj" entered phase "Bound"
I0731 08:19:15.521877       1 pv_controller.go:982] volume "local-qpgdj" bound to claim "provisioning-4484/pvc-8fmkk"
I0731 08:19:15.526755       1 pv_controller.go:823] claim "provisioning-4484/pvc-8fmkk" entered phase "Bound"
I0731 08:19:15.526932       1 pv_controller.go:930] claim "provisioning-7490/pvc-nxvb5" bound to volume "local-wxwrl"
I0731 08:19:15.533429       1 pv_controller.go:879] volume "local-wxwrl" entered phase "Bound"
I0731 08:19:15.533453       1 pv_controller.go:982] volume "local-wxwrl" bound to claim "provisioning-7490/pvc-nxvb5"
I0731 08:19:15.537836       1 pv_controller.go:823] claim "provisioning-7490/pvc-nxvb5" entered phase "Bound"
I0731 08:19:15.682952       1 event.go:291] "Event occurred" object="statefulset-3566/ss" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod ss-1 in StatefulSet ss successful"
I0731 08:19:15.937834       1 aws.go:2037] Releasing in-process attachment entry: cs -> volume vol-08c1a577477ed26b0
I0731 08:19:15.937886       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-7266dba5-1a16-4752-8443-eb2cc822e00c" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0") from node "ip-172-20-54-176.eu-west-2.compute.internal"
I0731 08:19:15.938060       1 event.go:291] "Event occurred" object="fsgroupchangepolicy-2750/pod-20081d80-3f18-43f8-a12d-30012dfb1647" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\" "
I0731 08:19:16.107178       1 namespace_controller.go:185] Namespace has been deleted volume-1199
I0731 08:19:16.222785       1 event.go:291] "Event occurred" object="ephemeral-9128-6889/csi-hostpath-attacher" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful"
I0731 08:19:16.552127       1 event.go:291] "Event occurred" object="ephemeral-9128-6889/csi-hostpathplugin" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful"
I0731 08:19:16.759124       1 event.go:291] "Event occurred" object="ephemeral-9128-6889/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
I0731 08:19:16.968967       1 event.go:291] "Event occurred" object="ephemeral-9128-6889/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
I0731 08:19:17.200414       1 event.go:291] "Event occurred" object="ephemeral-9128-6889/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
E0731 08:19:17.351156       1 tokens_controller.go:262] error synchronizing serviceaccount volume-7161/default: secrets "default-token-2fhkk" is forbidden: unable to create new content in namespace volume-7161 because it is being terminated
I0731 08:19:17.491825       1 event.go:291] "Event occurred" object="ephemeral-9128/inline-volume-tester-vknfj-my-volume-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-ephemeral-9128\" or manually created by system administrator"
I0731 08:19:18.320973       1 namespace_controller.go:185] Namespace has been deleted nettest-1351
I0731 08:19:18.607474       1 namespace_controller.go:185] Namespace has been deleted nettest-7980
E0731 08:19:19.851680       1 tokens_controller.go:262] error synchronizing serviceaccount secrets-4694/default: secrets "default-token-8pxwm" is forbidden: unable to create new content in namespace secrets-4694 because it is being terminated
I0731 08:19:19.923429       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-7871/pvc-6cvlt"
I0731 08:19:19.936416       1 pv_controller.go:640] volume "nfs-j4phz" is released and reclaim policy "Retain" will be executed
I0731 08:19:19.940601       1 pv_controller.go:879] volume "nfs-j4phz" entered phase "Released"
I0731 08:19:20.267028       1 pv_controller.go:879] volume "pvc-94a4cf04-5c76-4bed-9ec8-171764d86b31" entered phase "Bound"
I0731 08:19:20.267345       1 pv_controller.go:982] volume "pvc-94a4cf04-5c76-4bed-9ec8-171764d86b31" bound to claim "ephemeral-9128/inline-volume-tester-vknfj-my-volume-0"
I0731 08:19:20.273378       1 pv_controller.go:823] claim "ephemeral-9128/inline-volume-tester-vknfj-my-volume-0" entered phase "Bound"
I0731 08:19:20.453015       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-7871/pvc-q6kwm"
I0731 08:19:20.457369       1 pv_controller.go:640] volume "nfs-6fjcb" is released and reclaim policy "Retain" will be executed
I0731 08:19:20.462167       1 pv_controller.go:879] volume "nfs-6fjcb" entered phase "Released"
E0731 08:19:20.703813       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-6217/pvc-wnptj: storageclass.storage.k8s.io "provisioning-6217" not found
I0731 08:19:20.703858       1 event.go:291] "Event occurred" object="provisioning-6217/pvc-wnptj" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"provisioning-6217\" not found"
I0731 08:19:20.808882       1 pv_controller.go:879] volume "local-4gx25" entered phase "Available"
I0731 08:19:20.964730       1 pvc_protection_controller.go:291] "PVC is unused" PVC="pv-7871/pvc-kdc7r"
I0731 08:19:20.969682       1 pv_controller.go:640] volume "nfs-v98sx" is released and reclaim policy "Retain" will be executed
I0731 08:19:20.972978       1 pv_controller.go:879] volume "nfs-v98sx" entered phase "Released"
I0731 08:19:21.093812       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-94a4cf04-5c76-4bed-9ec8-171764d86b31" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-9128^0158678b-f1d8-11eb-8a18-c2ccd6bd424c") from node "ip-172-20-58-77.eu-west-2.compute.internal"
I0731 08:19:21.274992       1 pv_controller_base.go:505] deletion of claim "pv-7871/pvc-6cvlt" was already processed
I0731 08:19:21.308551       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "aws-wbnsj" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-00dcc4c13455182fd") on node "ip-172-20-58-77.eu-west-2.compute.internal"
I0731 08:19:21.309987       1 operation_generator.go:1483] Verified volume is safe to detach for volume "aws-wbnsj" (UniqueName: "kubernetes.io/aws-ebs/aws://eu-west-2a/vol-00dcc4c13455182fd") on node "ip-172-20-58-77.eu-west-2.compute.internal"
I0731 08:19:21.378734       1 pv_controller_base.go:505] deletion of claim "pv-7871/pvc-q6kwm" was already processed
I0731 08:19:21.486103       1 pv_controller_base.go:505] deletion of claim "pv-7871/pvc-kdc7r" was already processed
I0731 08:19:21.639422       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume "pvc-94a4cf04-5c76-4bed-9ec8-171764d86b31" (UniqueName: "kubernetes.io/csi/csi-hostpath-ephemeral-9128^0158678b-f1d8-11eb-8a18-c2ccd6bd424c") from node "ip-172-20-58-77.eu-west-2.compute.internal"
I0731 08:19:21.639784       1 event.go:291] "Event occurred" object="ephemeral-9128/inline-volume-tester-vknfj" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-94a4cf04-5c76-4bed-9ec8-171764d86b31\" "
I0731 08:19:21.720703       1 pvc_protection_controller.go:291] "PVC is unused" PVC="provisioning-7490/pvc-nxvb5"
I0731 08:19:21.726938       1 pv_controller.go:640] volume "local-wxwrl" is released and reclaim policy "Retain" will be executed
I0731 08:19:21.729892       1 pv_controller.go:879] volume "local-wxwrl" entered phase "Released"
I0731 08:19:21.826801       1 pv_controller_base.go:505] deletion of claim "provisioning-7490/pvc-nxvb5" was already processed
I0731 08:19:22.018231       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-3302/test-rolling-update-with-lb-864fb64577" need=2 deleting=1
I0731 08:19:22.018863       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-864fb64577 to 2"
I0731 08:19:22.018905       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-3302/test-rolling-update-with-lb-864fb64577" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]
I0731 08:19:22.019074       1 controller_utils.go:602] "Deleting pod" controller="test-rolling-update-with-lb-864fb64577" pod="deployment-3302/test-rolling-update-with-lb-864fb64577-r7brg"
I0731 08:19:22.028255       1 replica_set.go:559] "Too few replicas" replicaSet="deployment-3302/test-rolling-update-with-lb-5ff6986c95" need=2 creating=1
I0731 08:19:22.032408       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-rolling-update-with-lb-5ff6986c95 to 2"
I0731 08:19:22.042568       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb-5ff6986c95" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-with-lb-5ff6986c95-7hsd9"
I0731 08:19:22.047160       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-864fb64577-r7brg"
I0731 08:19:22.048236       1 deployment_controller.go:490] "Error syncing deployment" deployment="deployment-3302/test-rolling-update-with-lb" err="Operation cannot be fulfilled on deployments.apps \"test-rolling-update-with-lb\": the object has been modified; please apply your changes to the latest version and try again"
W0731 08:19:22.056807       1 endpointslice_controller.go:305] Error syncing endpoint slices for service "deployment-3302/test-rolling-update-with-lb", retrying. Error: EndpointSlice informer cache is out of date
I0731 08:19:22.117545       1 namespace_controller.go:185] Namespace has been deleted ephemeral-342-5124
E0731 08:19:22.319256       1 tokens_controller.go:262] error synchronizing serviceaccount provisioning-5462/default: secrets "default-token-q5mqc" is forbidden: unable to create new content in namespace provisioning-5462 because it is being terminated
I0731 08:19:22.379321       1 namespace_controller.go:185] Namespace has been deleted volume-7161
I0731 08:19:22.695984       1 pv_controller.go:879] volume "local-pv2pcbz" entered phase "Available"
I0731 08:19:22.714656       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-1379/pod-3563ad04-96a8-40bf-bdd8-760e107750ed" PVC="persistent-local-volumes-test-1379/pvc-z579x"
I0731 08:19:22.714694       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-1379/pvc-z579x"
I0731 08:19:22.795200       1 pv_controller.go:930] claim "persistent-local-volumes-test-9130/pvc-wxv4x" bound to volume "local-pv2pcbz"
I0731 08:19:22.801238       1 pv_controller.go:879] volume "local-pv2pcbz" entered phase "Bound"
I0731 08:19:22.801267       1 pv_controller.go:982] volume "local-pv2pcbz" bound to claim "persistent-local-volumes-test-9130/pvc-wxv4x"
I0731 08:19:22.812915       1 pv_controller.go:823] claim "persistent-local-volumes-test-9130/pvc-wxv4x" entered phase "Bound"
E0731 08:19:23.246162       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0731 08:19:23.473342       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="persistent-local-volumes-test-1379/pod-3563ad04-96a8-40bf-bdd8-760e107750ed" PVC="persistent-local-volumes-test-1379/pvc-z579x"
I0731 08:19:23.473562       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="persistent-local-volumes-test-1379/pvc-z579x"
I0731 08:19:23.783608       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set test-rolling-update-with-lb-864fb64577 to 1"
I0731 08:19:23.784071       1 replica_set.go:595] "Too many replicas" replicaSet="deployment-3302/test-rolling-update-with-lb-864fb64577" need=1 deleting=1
I0731 08:19:23.784209       1 replica_set.go:223] "Found related ReplicaSets" replicaSet="deployment-3302/test-rolling-update-with-lb-864fb64577" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]
I0731 08:19:23.784388       1 controller_utils.go:602] "Deleting pod" controller="test-rolling-update-with-lb-864fb64577" pod="deployment-3302/test-rolling-update-with-lb-864fb64577-82jvz"
I0731 08:19:23.809458       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb-864fb64577" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: test-rolling-update-with-lb-864fb64577-82jvz"
I0731 08:19:23.813022       1 event.go:291] "Event occurred" object="deployment-3302/test-rolling-update-with-lb" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set 
test-rolling-update-with-lb-5ff6986c95 to 3\"\nI0731 08:19:23.814673       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-3302/test-rolling-update-with-lb-5ff6986c95\" need=3 creating=1\nI0731 08:19:23.828149       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb-5ff6986c95\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-5ff6986c95-g4f7n\"\nE0731 08:19:23.979981       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0731 08:19:24.167436       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-1379/pod-3563ad04-96a8-40bf-bdd8-760e107750ed\" PVC=\"persistent-local-volumes-test-1379/pvc-z579x\"\nI0731 08:19:24.167458       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1379/pvc-z579x\"\nE0731 08:19:24.592531       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-6523/inline-volume-mpfw7-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0731 08:19:24.592870       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523/inline-volume-mpfw7-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0731 08:19:24.811631       1 event.go:291] \"Event occurred\" object=\"job-8694/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-p7w2j\"\nI0731 08:19:24.816175       1 event.go:291] \"Event occurred\" object=\"job-8694/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-cbg5j\"\nI0731 08:19:24.897213       1 graph_builder.go:587] add [v1/Pod, namespace: ephemeral-6523, name: inline-volume-mpfw7, uid: 0695e837-f690-4fb4-8146-ccbbff0b1439] to the attemptToDelete, because it's waiting for its dependents to be deleted\nI0731 08:19:24.897404       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6523/inline-volume-mpfw7-my-volume\" objectUID=ccca3435-88d8-484c-b600-2d868f9cd105 kind=\"PersistentVolumeClaim\" virtual=false\nI0731 08:19:24.897779       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6523/inline-volume-mpfw7\" objectUID=0695e837-f690-4fb4-8146-ccbbff0b1439 kind=\"Pod\" virtual=false\nI0731 08:19:24.899953       1 garbagecollector.go:595] adding [v1/PersistentVolumeClaim, namespace: ephemeral-6523, name: inline-volume-mpfw7-my-volume, uid: ccca3435-88d8-484c-b600-2d868f9cd105] to attemptToDelete, because its owner [v1/Pod, namespace: ephemeral-6523, name: inline-volume-mpfw7, uid: 0695e837-f690-4fb4-8146-ccbbff0b1439] is deletingDependents\nI0731 08:19:24.901175       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-6523/inline-volume-mpfw7-my-volume\" objectUID=ccca3435-88d8-484c-b600-2d868f9cd105 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0731 08:19:24.903664       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6523/inline-volume-mpfw7-my-volume\" objectUID=ccca3435-88d8-484c-b600-2d868f9cd105 
kind=\"PersistentVolumeClaim\" virtual=false\nE0731 08:19:24.903997       1 pv_controller.go:1452] error finding provisioning plugin for claim ephemeral-6523/inline-volume-mpfw7-my-volume: storageclass.storage.k8s.io \"no-such-storage-class\" not found\nI0731 08:19:24.904293       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523/inline-volume-mpfw7-my-volume\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"no-such-storage-class\\\" not found\"\nI0731 08:19:24.907629       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"ephemeral-6523/inline-volume-mpfw7-my-volume\"\nI0731 08:19:24.911851       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6523/inline-volume-mpfw7\" objectUID=0695e837-f690-4fb4-8146-ccbbff0b1439 kind=\"Pod\" virtual=false\nI0731 08:19:24.912243       1 garbagecollector.go:580] \"Deleting object\" object=\"ephemeral-6523/inline-volume-mpfw7-my-volume\" objectUID=ccca3435-88d8-484c-b600-2d868f9cd105 kind=\"PersistentVolumeClaim\" propagationPolicy=Background\nI0731 08:19:24.914009       1 garbagecollector.go:590] remove DeleteDependents finalizer for item [v1/Pod, namespace: ephemeral-6523, name: inline-volume-mpfw7, uid: 0695e837-f690-4fb4-8146-ccbbff0b1439]\nE0731 08:19:24.914211       1 garbagecollector.go:350] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"v1\", Kind:\"PersistentVolumeClaim\", Name:\"inline-volume-mpfw7-my-volume\", UID:\"ccca3435-88d8-484c-b600-2d868f9cd105\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"ephemeral-6523\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Pod\", Name:\"inline-volume-mpfw7\", UID:\"0695e837-f690-4fb4-8146-ccbbff0b1439\", Controller:(*bool)(0xc0032ace6a), BlockOwnerDeletion:(*bool)(0xc0032ace6b)}}}: persistentvolumeclaims \"inline-volume-mpfw7-my-volume\" not found\nI0731 08:19:24.919848       1 garbagecollector.go:471] \"Processing object\" object=\"ephemeral-6523/inline-volume-mpfw7-my-volume\" objectUID=ccca3435-88d8-484c-b600-2d868f9cd105 kind=\"PersistentVolumeClaim\" virtual=false\nI0731 08:19:24.935354       1 namespace_controller.go:185] Namespace has been deleted secrets-4694\nI0731 08:19:24.954994       1 namespace_controller.go:185] Namespace has been deleted provisioning-6562\nI0731 08:19:25.294760       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6323/pod-607eb63e-e6d1-4cb1-8cf8-6243287aa846\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:25.294882       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:25.374481       1 pvc_protection_controller.go:303] \"Pod uses PVC\" 
pod=\"persistent-local-volumes-test-1379/pod-3563ad04-96a8-40bf-bdd8-760e107750ed\" PVC=\"persistent-local-volumes-test-1379/pvc-z579x\"\nI0731 08:19:25.374576       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1379/pvc-z579x\"\nI0731 08:19:25.377551       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-1379/pod-493b1612-2d42-4135-8ff5-3c9ed79eadec\" PVC=\"persistent-local-volumes-test-1379/pvc-z579x\"\nI0731 08:19:25.377651       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-1379/pvc-z579x\"\nI0731 08:19:25.452597       1 route_controller.go:294] set node ip-172-20-61-108.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:25.452597       1 route_controller.go:294] set node ip-172-20-54-176.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:25.452621       1 route_controller.go:294] set node ip-172-20-58-77.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:25.452628       1 route_controller.go:294] set node ip-172-20-51-93.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:25.452632       1 route_controller.go:294] set node ip-172-20-60-242.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:26.115625       1 namespace_controller.go:185] Namespace has been deleted replication-controller-3930\nE0731 08:19:26.337349       1 tokens_controller.go:262] error synchronizing serviceaccount container-probe-7282/default: secrets \"default-token-wcb5z\" is forbidden: unable to create new content in namespace container-probe-7282 because it is being terminated\nI0731 08:19:26.750646       1 aws.go:2291] Waiting for volume \"vol-00dcc4c13455182fd\" state: actual=detaching, desired=detached\nI0731 08:19:27.365707       1 namespace_controller.go:185] Namespace has been deleted provisioning-5462\nE0731 08:19:27.640123       1 pv_controller.go:1452] error finding provisioning plugin for claim volumemode-7666/pvc-cjgrg: storageclass.storage.k8s.io \"volumemode-7666\" not found\nI0731 08:19:27.640754       1 event.go:291] \"Event occurred\" object=\"volumemode-7666/pvc-cjgrg\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-7666\\\" not found\"\nI0731 08:19:27.744792       1 pv_controller.go:879] volume \"local-2hrw7\" entered phase \"Available\"\nI0731 08:19:28.007501       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523-4151/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0731 08:19:28.343073       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523-4151/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0731 08:19:28.553609       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523-4151/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" 
message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0731 08:19:28.762177       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523-4151/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0731 08:19:28.775464       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6323/pod-607eb63e-e6d1-4cb1-8cf8-6243287aa846\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:28.775629       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:28.804430       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-07-31 08:18:47 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcv\",\n  InstanceId: \"i-01d69da9e39710e15\",\n  State: \"detaching\",\n  VolumeId: \"vol-00dcc4c13455182fd\"\n}\nI0731 08:19:28.804467       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"aws-wbnsj\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-00dcc4c13455182fd\") on node \"ip-172-20-58-77.eu-west-2.compute.internal\" \nI0731 08:19:28.877908       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-wbnsj\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-00dcc4c13455182fd\") from node \"ip-172-20-54-176.eu-west-2.compute.internal\" \nE0731 08:19:28.916134       1 tokens_controller.go:262] error synchronizing serviceaccount topology-8068/default: secrets \"default-token-w6ksc\" is forbidden: unable to create new content in namespace topology-8068 because it is being terminated\nI0731 08:19:28.917642       1 aws.go:2014] Assigned mount device cf -> volume vol-00dcc4c13455182fd\nE0731 08:19:28.945253       1 tokens_controller.go:262] error synchronizing serviceaccount nettest-1558/default: secrets \"default-token-p27hp\" is forbidden: unable to create new content in namespace nettest-1558 because it is being terminated\nI0731 08:19:28.954017       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"provisioning-4484/pvc-8fmkk\"\nI0731 08:19:28.961673       1 pv_controller.go:640] volume \"local-qpgdj\" is released and reclaim policy \"Retain\" will be executed\nI0731 08:19:28.965279       1 pv_controller.go:879] volume \"local-qpgdj\" entered phase \"Released\"\nI0731 08:19:28.978626       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523-4151/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0731 08:19:29.068807       1 pv_controller_base.go:505] deletion of claim \"provisioning-4484/pvc-8fmkk\" was already processed\nE0731 08:19:29.118310       1 pv_controller.go:1452] error finding provisioning plugin for claim volumemode-8886/pvc-wwxzg: storageclass.storage.k8s.io \"volumemode-8886\" not found\nI0731 08:19:29.118937       1 event.go:291] \"Event occurred\" object=\"volumemode-8886/pvc-wwxzg\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-8886\\\" not found\"\nI0731 08:19:29.198222       1 pvc_protection_controller.go:291] \"PVC is unused\" 
PVC=\"persistent-local-volumes-test-1379/pvc-z579x\"\nI0731 08:19:29.210547       1 pv_controller.go:640] volume \"local-pvh6vxk\" is released and reclaim policy \"Retain\" will be executed\nI0731 08:19:29.214592       1 pv_controller.go:879] volume \"local-pvh6vxk\" entered phase \"Released\"\nI0731 08:19:29.223568       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-1379/pvc-z579x\" was already processed\nI0731 08:19:29.297322       1 aws.go:2427] AttachVolume volume=\"vol-00dcc4c13455182fd\" instance=\"i-02a39ac8c52407743\" request returned {\n  AttachTime: 2021-07-31 08:19:29.287 +0000 UTC,\n  Device: \"/dev/xvdcf\",\n  InstanceId: \"i-02a39ac8c52407743\",\n  State: \"attaching\",\n  VolumeId: \"vol-00dcc4c13455182fd\"\n}\nI0731 08:19:29.312013       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523/inline-volume-tester-z85xb-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-6523\\\" or manually created by system administrator\"\nI0731 08:19:29.312439       1 pv_controller.go:879] volume \"aws-bzlvv\" entered phase \"Available\"\nI0731 08:19:29.612948       1 event.go:291] \"Event occurred\" object=\"deployment-7770/test-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-deployment-7b4c744884 to 2\"\nI0731 08:19:29.613255       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-7770/test-deployment-7b4c744884\" need=2 creating=2\nI0731 08:19:29.622941       1 event.go:291] \"Event occurred\" object=\"deployment-7770/test-deployment-7b4c744884\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-deployment-7b4c744884-r4m5s\"\nI0731 08:19:29.628013       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-7770/test-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0731 08:19:29.628625       1 event.go:291] \"Event occurred\" object=\"deployment-7770/test-deployment-7b4c744884\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-deployment-7b4c744884-gk724\"\nI0731 08:19:30.172307       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6323/pod-607eb63e-e6d1-4cb1-8cf8-6243287aa846\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:30.172525       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:30.514104       1 pv_controller.go:930] claim \"volumemode-7666/pvc-cjgrg\" bound to volume \"local-2hrw7\"\nI0731 08:19:30.519697       1 pv_controller.go:879] volume \"local-2hrw7\" entered phase \"Bound\"\nI0731 08:19:30.519825       1 pv_controller.go:982] volume \"local-2hrw7\" bound to claim \"volumemode-7666/pvc-cjgrg\"\nI0731 08:19:30.524279       1 pv_controller.go:823] claim \"volumemode-7666/pvc-cjgrg\" entered phase \"Bound\"\nI0731 08:19:30.524355       1 pv_controller.go:930] claim \"volumemode-8886/pvc-wwxzg\" bound to volume \"aws-bzlvv\"\nI0731 08:19:30.529637       1 pv_controller.go:879] volume \"aws-bzlvv\" entered phase 
\"Bound\"\nI0731 08:19:30.529662       1 pv_controller.go:982] volume \"aws-bzlvv\" bound to claim \"volumemode-8886/pvc-wwxzg\"\nI0731 08:19:30.534710       1 pv_controller.go:823] claim \"volumemode-8886/pvc-wwxzg\" entered phase \"Bound\"\nI0731 08:19:30.535157       1 pv_controller.go:930] claim \"provisioning-6217/pvc-wnptj\" bound to volume \"local-4gx25\"\nI0731 08:19:30.535606       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523/inline-volume-tester-z85xb-my-volume-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-ephemeral-6523\\\" or manually created by system administrator\"\nI0731 08:19:30.542093       1 pv_controller.go:879] volume \"local-4gx25\" entered phase \"Bound\"\nI0731 08:19:30.542139       1 pv_controller.go:982] volume \"local-4gx25\" bound to claim \"provisioning-6217/pvc-wnptj\"\nI0731 08:19:30.546243       1 pv_controller.go:823] claim \"provisioning-6217/pvc-wnptj\" entered phase \"Bound\"\nI0731 08:19:30.769120       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6323/pod-607eb63e-e6d1-4cb1-8cf8-6243287aa846\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:30.769492       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:30.771743       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-6323/pod-b58547ba-8153-4b75-9f91-67f8d004632b\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:30.771765       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:31.083799       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-3302/test-rolling-update-with-lb-864fb64577\" need=0 deleting=1\nI0731 08:19:31.083831       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-3302/test-rolling-update-with-lb-864fb64577\" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95]\nI0731 08:19:31.083900       1 controller_utils.go:602] \"Deleting pod\" controller=\"test-rolling-update-with-lb-864fb64577\" pod=\"deployment-3302/test-rolling-update-with-lb-864fb64577-bjw2z\"\nI0731 08:19:31.084202       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-864fb64577 to 0\"\nI0731 08:19:31.096317       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb-864fb64577\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-864fb64577-bjw2z\"\nI0731 08:19:31.121364       1 pv_controller.go:879] volume \"pvc-e2a2da26-efa3-451b-aa62-5438326506ed\" entered phase \"Bound\"\nI0731 08:19:31.121393       1 pv_controller.go:982] volume \"pvc-e2a2da26-efa3-451b-aa62-5438326506ed\" bound to claim \"ephemeral-6523/inline-volume-tester-z85xb-my-volume-0\"\nI0731 08:19:31.126064       1 pv_controller.go:823] claim \"ephemeral-6523/inline-volume-tester-z85xb-my-volume-0\" entered phase \"Bound\"\nI0731 08:19:31.396651       1 aws.go:2037] Releasing in-process attachment 
entry: cf -> volume vol-00dcc4c13455182fd\nI0731 08:19:31.396699       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"aws-wbnsj\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-00dcc4c13455182fd\") from node \"ip-172-20-54-176.eu-west-2.compute.internal\" \nI0731 08:19:31.396957       1 event.go:291] \"Event occurred\" object=\"volume-7501/aws-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-wbnsj\\\" \"\nI0731 08:19:31.475227       1 namespace_controller.go:185] Namespace has been deleted container-probe-7282\nI0731 08:19:31.533629       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"fsgroupchangepolicy-2750/aws6kjqh\"\nE0731 08:19:31.535232       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0731 08:19:31.538546       1 pv_controller.go:640] volume \"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\" is released and reclaim policy \"Delete\" will be executed\nI0731 08:19:31.541256       1 pv_controller.go:879] volume \"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\" entered phase \"Released\"\nI0731 08:19:31.542584       1 pv_controller.go:1341] isVolumeReleased[pvc-7266dba5-1a16-4752-8443-eb2cc822e00c]: volume is released\nI0731 08:19:31.655389       1 aws_util.go:62] Error deleting EBS Disk volume aws://eu-west-2a/vol-08c1a577477ed26b0: error deleting EBS volume \"vol-08c1a577477ed26b0\" since volume is currently attached to \"i-02a39ac8c52407743\"\nE0731 08:19:31.655486       1 goroutinemap.go:150] Operation for \"delete-pvc-7266dba5-1a16-4752-8443-eb2cc822e00c[47c50baa-8f67-435a-aa08-22f1d84de969]\" failed. No retries permitted until 2021-07-31 08:19:32.15545776 +0000 UTC m=+891.973541070 (durationBeforeRetry 500ms). 
Error: \"error deleting EBS volume \\\"vol-08c1a577477ed26b0\\\" since volume is currently attached to \\\"i-02a39ac8c52407743\\\"\"\nI0731 08:19:31.655604       1 event.go:291] \"Event occurred\" object=\"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\" kind=\"PersistentVolume\" apiVersion=\"v1\" type=\"Normal\" reason=\"VolumeDelete\" message=\"error deleting EBS volume \\\"vol-08c1a577477ed26b0\\\" since volume is currently attached to \\\"i-02a39ac8c52407743\\\"\"\nI0731 08:19:31.922138       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"aws-bzlvv\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-0c741f16f725c8d5c\") from node \"ip-172-20-54-176.eu-west-2.compute.internal\" \nI0731 08:19:31.934020       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-3302/test-rolling-update-with-lb-59c4fc87b4\" need=1 creating=1\nI0731 08:19:31.934338       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 1\"\nI0731 08:19:31.940503       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-mkclf\"\nI0731 08:19:31.951072       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-3302/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0731 08:19:31.964469       1 aws.go:2014] Assigned mount device cr -> volume vol-0c741f16f725c8d5c\nI0731 08:19:32.169823       1 event.go:291] \"Event occurred\" object=\"job-8694/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-qhvkr\"\nI0731 08:19:32.396892       1 aws.go:2427] AttachVolume volume=\"vol-0c741f16f725c8d5c\" instance=\"i-02a39ac8c52407743\" request returned {\n  AttachTime: 2021-07-31 08:19:32.388 +0000 UTC,\n  Device: \"/dev/xvdcr\",\n  InstanceId: \"i-02a39ac8c52407743\",\n  State: \"attaching\",\n  VolumeId: \"vol-0c741f16f725c8d5c\"\n}\nE0731 08:19:32.448264       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0731 08:19:32.888081       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-6323/default: secrets \"default-token-g24gj\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-6323 because it is being terminated\nI0731 08:19:32.899813       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-6323/pvc-rrlwx\"\nI0731 08:19:32.908023       1 pv_controller.go:640] volume \"local-pv27t2s\" is released and reclaim policy \"Retain\" will be executed\nI0731 08:19:32.911547       1 pv_controller.go:879] volume \"local-pv27t2s\" entered phase \"Released\"\nI0731 08:19:32.917645       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-6323/pvc-rrlwx\" was already processed\nI0731 08:19:32.940863       1 
reconciler.go:219] attacherDetacher.DetachVolume started for volume \"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0\") on node \"ip-172-20-54-176.eu-west-2.compute.internal\" \nI0731 08:19:32.942683       1 operation_generator.go:1483] Verified volume is safe to detach for volume \"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0\") on node \"ip-172-20-54-176.eu-west-2.compute.internal\" \nI0731 08:19:32.984522       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-6170-6168\nI0731 08:19:33.092102       1 namespace_controller.go:185] Namespace has been deleted provisioning-7490\nI0731 08:19:33.142504       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume \"pvc-e2a2da26-efa3-451b-aa62-5438326506ed\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-6523^07cd9662-f1d8-11eb-b431-5288cc22efa7\") from node \"ip-172-20-51-93.eu-west-2.compute.internal\" \nI0731 08:19:33.279466       1 stateful_set_control.go:523] StatefulSet statefulset-3566/ss terminating Pod ss-0 for update\nI0731 08:19:33.285967       1 event.go:291] \"Event occurred\" object=\"statefulset-3566/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0731 08:19:33.517341       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-9130/pod-0194b3ae-900a-4382-8905-0751c8f5a381\" PVC=\"persistent-local-volumes-test-9130/pvc-wxv4x\"\nI0731 08:19:33.517367       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-9130/pvc-wxv4x\"\nI0731 08:19:33.631861       1 stateful_set_control.go:523] StatefulSet statefulset-1562/ss2 terminating Pod ss2-2 for update\nI0731 08:19:33.638381       1 garbagecollector.go:471] \"Processing object\" object=\"services-8885/multi-endpoint-test-jgnw6\" objectUID=c62aa8d6-628e-4772-a21a-77b6d314493b kind=\"EndpointSlice\" virtual=false\nI0731 08:19:33.641728       1 garbagecollector.go:580] \"Deleting object\" object=\"services-8885/multi-endpoint-test-jgnw6\" objectUID=c62aa8d6-628e-4772-a21a-77b6d314493b kind=\"EndpointSlice\" propagationPolicy=Background\nI0731 08:19:33.642456       1 event.go:291] \"Event occurred\" object=\"statefulset-1562/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss2-2 in StatefulSet ss2 successful\"\nI0731 08:19:33.682868       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"pvc-e2a2da26-efa3-451b-aa62-5438326506ed\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-ephemeral-6523^07cd9662-f1d8-11eb-b431-5288cc22efa7\") from node \"ip-172-20-51-93.eu-west-2.compute.internal\" \nI0731 08:19:33.683013       1 event.go:291] \"Event occurred\" object=\"ephemeral-6523/inline-volume-tester-z85xb\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-e2a2da26-efa3-451b-aa62-5438326506ed\\\" \"\nE0731 08:19:33.764907       1 pv_controller.go:1452] error finding provisioning plugin for claim provisioning-3284/pvc-vl2zt: storageclass.storage.k8s.io \"provisioning-3284\" not found\nI0731 08:19:33.765342       1 event.go:291] \"Event occurred\" object=\"provisioning-3284/pvc-vl2zt\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" 
type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-3284\\\" not found\"\nI0731 08:19:33.872330       1 pv_controller.go:879] volume \"local-8l55v\" entered phase \"Available\"\nW0731 08:19:34.105453       1 utils.go:265] Service services-1014/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0731 08:19:34.122119       1 namespace_controller.go:185] Namespace has been deleted topology-8068\nI0731 08:19:34.208556       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"services-1014/service-headless\" need=3 creating=3\nW0731 08:19:34.212975       1 utils.go:265] Service services-1014/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0731 08:19:34.213659       1 event.go:291] \"Event occurred\" object=\"services-1014/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-hngm4\"\nI0731 08:19:34.220571       1 event.go:291] \"Event occurred\" object=\"services-1014/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-ml8cn\"\nI0731 08:19:34.221083       1 event.go:291] \"Event occurred\" object=\"services-1014/service-headless\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: service-headless-kjnkz\"\nW0731 08:19:34.221474       1 utils.go:265] Service services-1014/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nW0731 08:19:34.234804       1 utils.go:265] Service services-1014/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0731 08:19:34.508734       1 aws.go:2037] Releasing in-process attachment entry: cr -> volume vol-0c741f16f725c8d5c\nI0731 08:19:34.508963       1 operation_generator.go:368] AttachVolume.Attach succeeded for volume \"aws-bzlvv\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-0c741f16f725c8d5c\") from node \"ip-172-20-54-176.eu-west-2.compute.internal\" \nI0731 08:19:34.509490       1 event.go:291] \"Event occurred\" object=\"volumemode-8886/pod-8a63b8d6-bff1-4ead-9776-5902b6d1bd4d\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"aws-bzlvv\\\" \"\nE0731 08:19:34.877081       1 tokens_controller.go:262] error synchronizing serviceaccount port-forwarding-5542/default: secrets \"default-token-fthpl\" is forbidden: unable to create new content in namespace port-forwarding-5542 because it is being terminated\nI0731 08:19:34.939926       1 namespace_controller.go:185] Namespace has been deleted security-context-9032\nI0731 08:19:35.442878       1 route_controller.go:294] set node ip-172-20-60-242.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:35.442878       1 route_controller.go:294] set node ip-172-20-51-93.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:35.442904       1 route_controller.go:294] set node ip-172-20-54-176.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:35.442931       1 route_controller.go:294] set node 
ip-172-20-58-77.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:35.443015       1 route_controller.go:294] set node ip-172-20-61-108.eu-west-2.compute.internal with NodeNetworkUnavailable=false was canceled because it is already set\nI0731 08:19:35.559128       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"webhook-900/sample-webhook-deployment-78988fc6cd\" need=1 creating=1\nI0731 08:19:35.559599       1 event.go:291] \"Event occurred\" object=\"webhook-900/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-78988fc6cd to 1\"\nI0731 08:19:35.569494       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"webhook-900/sample-webhook-deployment\" err=\"Operation cannot be fulfilled on deployments.apps \\\"sample-webhook-deployment\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0731 08:19:35.571906       1 event.go:291] \"Event occurred\" object=\"webhook-900/sample-webhook-deployment-78988fc6cd\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-78988fc6cd-h5wzp\"\nE0731 08:19:36.336990       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0731 08:19:36.357465       1 namespace_controller.go:185] Namespace has been deleted container-runtime-4591\nE0731 08:19:36.548412       1 tokens_controller.go:262] error synchronizing serviceaccount events-5662/default: secrets \"default-token-bg8x4\" is forbidden: unable to create new content in namespace events-5662 because it is being terminated\nE0731 08:19:36.749565       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0731 08:19:37.203621       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-9130/pod-0194b3ae-900a-4382-8905-0751c8f5a381\" PVC=\"persistent-local-volumes-test-9130/pvc-wxv4x\"\nI0731 08:19:37.203647       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-9130/pvc-wxv4x\"\nI0731 08:19:37.402583       1 pvc_protection_controller.go:303] \"Pod uses PVC\" pod=\"persistent-local-volumes-test-9130/pod-0194b3ae-900a-4382-8905-0751c8f5a381\" PVC=\"persistent-local-volumes-test-9130/pvc-wxv4x\"\nI0731 08:19:37.402963       1 pvc_protection_controller.go:181] \"Keeping PVC because it is being used\" PVC=\"persistent-local-volumes-test-9130/pvc-wxv4x\"\nI0731 08:19:37.408159       1 pvc_protection_controller.go:291] \"PVC is unused\" PVC=\"persistent-local-volumes-test-9130/pvc-wxv4x\"\nI0731 08:19:37.413866       1 pv_controller.go:640] volume \"local-pv2pcbz\" is released and reclaim policy \"Retain\" will be executed\nI0731 08:19:37.417898       1 pv_controller.go:879] volume \"local-pv2pcbz\" entered phase \"Released\"\nI0731 08:19:37.419926       1 pv_controller_base.go:505] deletion of claim \"persistent-local-volumes-test-9130/pvc-wxv4x\" was already processed\nE0731 08:19:37.590441       1 pv_controller.go:1452] error finding provisioning plugin for claim 
volume-2764/pvc-dtsp9: storageclass.storage.k8s.io \"volume-2764\" not found\nI0731 08:19:37.590717       1 event.go:291] \"Event occurred\" object=\"volume-2764/pvc-dtsp9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-2764\\\" not found\"\nI0731 08:19:37.695611       1 pv_controller.go:879] volume \"local-gjx8l\" entered phase \"Available\"\nI0731 08:19:38.404055       1 aws.go:2291] Waiting for volume \"vol-08c1a577477ed26b0\" state: actual=detaching, desired=detached\nI0731 08:19:38.570633       1 event.go:291] \"Event occurred\" object=\"job-8694/fail-once-non-local\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: fail-once-non-local-nh448\"\nE0731 08:19:38.577828       1 job_controller.go:404] Error syncing job: failed pod(s) detected for job key \"job-8694/fail-once-non-local\"\nI0731 08:19:38.694926       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set test-rolling-update-with-lb-5ff6986c95 to 2\"\nI0731 08:19:38.697520       1 replica_set.go:595] \"Too many replicas\" replicaSet=\"deployment-3302/test-rolling-update-with-lb-5ff6986c95\" need=2 deleting=1\nI0731 08:19:38.697620       1 replica_set.go:223] \"Found related ReplicaSets\" replicaSet=\"deployment-3302/test-rolling-update-with-lb-5ff6986c95\" relatedReplicaSets=[test-rolling-update-with-lb-864fb64577 test-rolling-update-with-lb-5ff6986c95 test-rolling-update-with-lb-59c4fc87b4]\nI0731 08:19:38.697794       1 controller_utils.go:602] \"Deleting pod\" controller=\"test-rolling-update-with-lb-5ff6986c95\" pod=\"deployment-3302/test-rolling-update-with-lb-5ff6986c95-g4f7n\"\nI0731 08:19:38.715963       1 replica_set.go:559] \"Too few replicas\" replicaSet=\"deployment-3302/test-rolling-update-with-lb-59c4fc87b4\" need=2 creating=1\nI0731 08:19:38.716666       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set test-rolling-update-with-lb-59c4fc87b4 to 2\"\nI0731 08:19:38.716764       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb-5ff6986c95\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: test-rolling-update-with-lb-5ff6986c95-g4f7n\"\nI0731 08:19:38.735852       1 event.go:291] \"Event occurred\" object=\"deployment-3302/test-rolling-update-with-lb-59c4fc87b4\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: test-rolling-update-with-lb-59c4fc87b4-h9w6v\"\nI0731 08:19:38.749845       1 deployment_controller.go:490] \"Error syncing deployment\" deployment=\"deployment-3302/test-rolling-update-with-lb\" err=\"Operation cannot be fulfilled on deployments.apps \\\"test-rolling-update-with-lb\\\": the object has been modified; please apply your changes to the latest version and try again\"\nE0731 08:19:38.765846       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0731 08:19:38.923879       1 namespace_controller.go:185] Namespace 
has been deleted kubectl-9633\nI0731 08:19:39.387978       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-1379\nE0731 08:19:39.447746       1 tokens_controller.go:262] error synchronizing serviceaccount disruption-706/default: secrets \"default-token-9b59n\" is forbidden: unable to create new content in namespace disruption-706 because it is being terminated\nI0731 08:19:39.492920       1 event.go:291] \"Event occurred\" object=\"statefulset-1562/ss2\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss2-2 in StatefulSet ss2 successful\"\nE0731 08:19:39.820501       1 tokens_controller.go:262] error synchronizing serviceaccount persistent-local-volumes-test-9130/default: secrets \"default-token-rnl2d\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-9130 because it is being terminated\nW0731 08:19:40.213339       1 utils.go:265] Service services-1014/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0731 08:19:40.286644       1 namespace_controller.go:185] Namespace has been deleted provisioning-4484\nW0731 08:19:40.471805       1 utils.go:265] Service services-1014/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0731 08:19:40.471787       1 aws.go:2517] waitForAttachmentStatus returned non-nil attachment with state=detached: {\n  AttachTime: 2021-07-31 08:19:13 +0000 UTC,\n  DeleteOnTermination: false,\n  Device: \"/dev/xvdcs\",\n  InstanceId: \"i-02a39ac8c52407743\",\n  State: \"detaching\",\n  VolumeId: \"vol-08c1a577477ed26b0\"\n}\nI0731 08:19:40.472495       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume \"pvc-7266dba5-1a16-4752-8443-eb2cc822e00c\" (UniqueName: \"kubernetes.io/aws-ebs/aws://eu-west-2a/vol-08c1a577477ed26b0\") on node \"ip-172-20-54-176.eu-west-2.compute.internal\" \nE0731 08:19:41.030687       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nW0731 08:19:41.483608       1 utils.go:265] Service services-1014/service-headless using reserved endpoint slices label, skipping label service.kubernetes.io/headless: \nI0731 08:19:41.621814       1 namespace_controller.go:185] Namespace has been deleted events-5662\n==== END logs for container kube-controller-manager of pod kube-system/kube-controller-manager-ip-172-20-60-242.eu-west-2.compute.internal ====\n==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-51-93.eu-west-2.compute.internal ====\nI0731 08:03:21.997488       1 flags.go:59] FLAG: --add-dir-header=\"false\"\nI0731 08:03:21.998516       1 flags.go:59] FLAG: --alsologtostderr=\"true\"\nI0731 08:03:21.998531       1 flags.go:59] FLAG: --bind-address=\"0.0.0.0\"\nI0731 08:03:21.998544       1 flags.go:59] FLAG: --bind-address-hard-fail=\"false\"\nI0731 08:03:21.998551       1 flags.go:59] FLAG: --boot-id-file=\"/proc/sys/kernel/random/boot_id\"\nI0731 08:03:21.998557       1 flags.go:59] FLAG: --cleanup=\"false\"\nI0731 08:03:21.998562       1 flags.go:59] FLAG: --cluster-cidr=\"100.96.0.0/11\"\nI0731 08:03:21.998572       1 flags.go:59] FLAG: --config=\"\"\nI0731 08:03:21.998577       1 flags.go:59] FLAG: --config-sync-period=\"15m0s\"\nI0731 08:03:21.998590       1 flags.go:59] FLAG: 
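(Both container logs in this dump use the standard klog prefix: severity letter, MMDD, wall-clock time, PID, source file:line, then the message. A small standalone Go sketch, not part of the kops test harness, that filters just the E-severity lines out of a dump like this when fed on stdin:)

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches klog-formatted lines such as:
    //   E0731 08:19:38.577828       1 job_controller.go:404] Error syncing job: ...
    // Capture groups: severity, MMDD, time, PID, source file:line, message.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
        for sc.Scan() {
            if m := klogLine.FindStringSubmatch(sc.Text()); m != nil && m[1] == "E" {
                fmt.Printf("%s %s %s\n", m[3], m[5], m[6]) // time, source, message
            }
        }
    }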
==== START logs for container kube-proxy of pod kube-system/kube-proxy-ip-172-20-51-93.eu-west-2.compute.internal ====
I0731 08:03:21.997488       1 flags.go:59] FLAG: --add-dir-header="false"
I0731 08:03:21.998516       1 flags.go:59] FLAG: --alsologtostderr="true"
I0731 08:03:21.998531       1 flags.go:59] FLAG: --bind-address="0.0.0.0"
I0731 08:03:21.998544       1 flags.go:59] FLAG: --bind-address-hard-fail="false"
I0731 08:03:21.998551       1 flags.go:59] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I0731 08:03:21.998557       1 flags.go:59] FLAG: --cleanup="false"
I0731 08:03:21.998562       1 flags.go:59] FLAG: --cluster-cidr="100.96.0.0/11"
I0731 08:03:21.998572       1 flags.go:59] FLAG: --config=""
I0731 08:03:21.998577       1 flags.go:59] FLAG: --config-sync-period="15m0s"
I0731 08:03:21.998590       1 flags.go:59] FLAG: --conntrack-max-per-core="131072"
I0731 08:03:21.998601       1 flags.go:59] FLAG: --conntrack-min="131072"
I0731 08:03:21.998606       1 flags.go:59] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I0731 08:03:21.998617       1 flags.go:59] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I0731 08:03:21.998623       1 flags.go:59] FLAG: --detect-local-mode=""
I0731 08:03:21.998633       1 flags.go:59] FLAG: --feature-gates=""
I0731 08:03:21.998641       1 flags.go:59] FLAG: --healthz-bind-address="0.0.0.0:10256"
I0731 08:03:21.998647       1 flags.go:59] FLAG: --healthz-port="10256"
I0731 08:03:21.998654       1 flags.go:59] FLAG: --help="false"
I0731 08:03:21.998661       1 flags.go:59] FLAG: --hostname-override="ip-172-20-51-93.eu-west-2.compute.internal"
I0731 08:03:21.998667       1 flags.go:59] FLAG: --iptables-masquerade-bit="14"
I0731 08:03:21.998672       1 flags.go:59] FLAG: --iptables-min-sync-period="1s"
I0731 08:03:21.998681       1 flags.go:59] FLAG: --iptables-sync-period="30s"
I0731 08:03:21.998700       1 flags.go:59] FLAG: --ipvs-exclude-cidrs="[]"
I0731 08:03:21.998718       1 flags.go:59] FLAG: --ipvs-min-sync-period="0s"
I0731 08:03:21.998724       1 flags.go:59] FLAG: --ipvs-scheduler=""
I0731 08:03:21.998729       1 flags.go:59] FLAG: --ipvs-strict-arp="false"
I0731 08:03:21.998735       1 flags.go:59] FLAG: --ipvs-sync-period="30s"
I0731 08:03:21.998745       1 flags.go:59] FLAG: --ipvs-tcp-timeout="0s"
I0731 08:03:21.998749       1 flags.go:59] FLAG: --ipvs-tcpfin-timeout="0s"
I0731 08:03:21.998753       1 flags.go:59] FLAG: --ipvs-udp-timeout="0s"
I0731 08:03:21.998760       1 flags.go:59] FLAG: --kube-api-burst="10"
I0731 08:03:21.998767       1 flags.go:59] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0731 08:03:21.998774       1 flags.go:59] FLAG: --kube-api-qps="5"
I0731 08:03:21.998784       1 flags.go:59] FLAG: --kubeconfig="/var/lib/kube-proxy/kubeconfig"
I0731 08:03:21.998790       1 flags.go:59] FLAG: --log-backtrace-at=":0"
I0731 08:03:21.998804       1 flags.go:59] FLAG: --log-dir=""
I0731 08:03:21.998810       1 flags.go:59] FLAG: --log-file="/var/log/kube-proxy.log"
I0731 08:03:21.998820       1 flags.go:59] FLAG: --log-file-max-size="1800"
I0731 08:03:21.998826       1 flags.go:59] FLAG: --log-flush-frequency="5s"
I0731 08:03:21.998834       1 flags.go:59] FLAG: --logtostderr="false"
I0731 08:03:21.998838       1 flags.go:59] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I0731 08:03:21.998844       1 flags.go:59] FLAG: --masquerade-all="false"
I0731 08:03:21.998848       1 flags.go:59] FLAG: --master="https://api.internal.e2e-ce144e612b-83c0c.test-cncf-aws.k8s.io"
I0731 08:03:21.998853       1 flags.go:59] FLAG: --metrics-bind-address="127.0.0.1:10249"
I0731 08:03:21.998859       1 flags.go:59] FLAG: --metrics-port="10249"
I0731 08:03:21.998864       1 flags.go:59] FLAG: --nodeport-addresses="[]"
I0731 08:03:21.998870       1 flags.go:59] FLAG: --one-output="false"
I0731 08:03:21.998877       1 flags.go:59] FLAG: --oom-score-adj="-998"
I0731 08:03:21.998882       1 flags.go:59] FLAG: --profiling="false"
I0731 08:03:21.998889       1 flags.go:59] FLAG: --proxy-mode=""
I0731 08:03:21.998894       1 flags.go:59] FLAG: --proxy-port-range=""
I0731 08:03:21.998900       1 flags.go:59] FLAG: --show-hidden-metrics-for-version=""
I0731 08:03:21.998904       1 flags.go:59] FLAG: --skip-headers="false"
I0731 08:03:21.998908       1 flags.go:59] FLAG: --skip-log-headers="false"
I0731 08:03:21.998912       1 flags.go:59] FLAG: --stderrthreshold="2"
I0731 08:03:21.998916       1 flags.go:59] FLAG: --udp-timeout="250ms"
I0731 08:03:21.998920       1 flags.go:59] FLAG: --v="2"
I0731 08:03:21.998925       1 flags.go:59] FLAG: --version="false"
I0731 08:03:21.998934       1 flags.go:59] FLAG: --vmodule=""
I0731 08:03:21.998939       1 flags.go:59] FLAG: --write-config-to=""
W0731 08:03:21.998991       1 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0731 08:03:21.999077       1 feature_gate.go:243] feature gates: &{map[]}
I0731 08:03:21.999195       1 feature_gate.go:243] feature gates: &{map[]}
I0731 08:03:22.113944       1 node.go:172] Successfully retrieved node IP: 172.20.51.93
I0731 08:03:22.113981       1 server_others.go:140] Detected node IP 172.20.51.93
W0731 08:03:22.114003       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
I0731 08:03:22.114122       1 server_others.go:177] DetectLocalMode: 'ClusterCIDR'
I0731 08:03:22.224484       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0731 08:03:22.224514       1 server_others.go:212] Using iptables Proxier.
I0731 08:03:22.224525       1 server_others.go:219] creating dualStackProxier for iptables.
W0731 08:03:22.224535       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
I0731 08:03:22.224616       1 utils.go:375] Changed sysctl "net/ipv4/conf/all/route_localnet": 0 -> 1
I0731 08:03:22.225199       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv4 mark="0x00004000"
I0731 08:03:22.225311       1 proxier.go:330] "iptables sync params" ipFamily=IPv4 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I0731 08:03:22.225734       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv4
I0731 08:03:22.225820       1 proxier.go:282] "using iptables mark for masquerade" ipFamily=IPv6 mark="0x00004000"
I0731 08:03:22.225855       1 proxier.go:330] "iptables sync params" ipFamily=IPv6 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I0731 08:03:22.225872       1 proxier.go:340] "iptables supports --random-fully" ipFamily=IPv6
I0731 08:03:22.227222       1 server.go:643] Version: v1.21.3
I0731 08:03:22.229505       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I0731 08:03:22.229540       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0731 08:03:22.229649       1 mount_linux.go:192] Detected OS without systemd
I0731 08:03:22.229930       1 conntrack.go:83] Setting conntrack hashsize to 65536
I0731 08:03:22.234789       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0731 08:03:22.234861       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0731 08:03:22.241944       1 config.go:315] Starting service config controller
I0731 08:03:22.242729       1 config.go:224] Starting endpoint slice config controller
I0731 08:03:22.242752       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0731 08:03:22.243038       1 shared_informer.go:240] Waiting for caches to sync for service config
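(The conntrack values above follow kube-proxy's sizing rule: nf_conntrack_max is the larger of --conntrack-max-per-core times the CPU count and --conntrack-min, and the conntrack hash table is sized at a quarter of that. A quick Go sketch of the arithmetic; the 2-CPU figure is inferred from 262144 / 131072 and is not printed in the log itself:)

    package main

    import "fmt"

    func main() {
        const (
            maxPerCore = 131072 // --conntrack-max-per-core, from the FLAG dump above
            floor      = 131072 // --conntrack-min, from the FLAG dump above
            numCPU     = 2      // assumption: 2 vCPUs on this node
        )
        nfConntrackMax := maxPerCore * numCPU
        if nfConntrackMax < floor {
            nfConntrackMax = floor
        }
        fmt.Println("nf_conntrack_max:", nfConntrackMax)     // 262144, as set in the log
        fmt.Println("conntrack hashsize:", nfConntrackMax/4) // 65536, as set in the log
    }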
I0731 08:03:22.247732       1 service.go:306] Service volume-expand-877-447/csi-hostpath-snapshotter updated: 1 ports
I0731 08:03:22.247771       1 service.go:306] Service crd-webhook-7764/e2e-test-crd-conversion-webhook updated: 1 ports
I0731 08:03:22.247791       1 service.go:306] Service volume-8375-8198/csi-hostpathplugin updated: 1 ports
I0731 08:03:22.247812       1 service.go:306] Service volume-expand-6870-9761/csi-hostpath-provisioner updated: 1 ports
I0731 08:03:22.247826       1 service.go:306] Service conntrack-8739/boom-server updated: 1 ports
I0731 08:03:22.247841       1 service.go:306] Service provisioning-3759-7888/csi-hostpathplugin updated: 1 ports
I0731 08:03:22.247859       1 service.go:306] Service provisioning-3759-7888/csi-hostpath-provisioner updated: 1 ports
I0731 08:03:22.247871       1 service.go:306] Service volume-expand-877-447/csi-hostpath-resizer updated: 1 ports
I0731 08:03:22.247907       1 service.go:306] Service volume-expand-877-447/csi-hostpath-provisioner updated: 1 ports
I0731 08:03:22.247931       1 service.go:306] Service kube-system/kube-dns updated: 3 ports
I0731 08:03:22.247948       1 service.go:306] Service volume-8375-8198/csi-hostpath-attacher updated: 1 ports
I0731 08:03:22.247963       1 service.go:306] Service volume-8375-8198/csi-hostpath-resizer updated: 1 ports
I0731 08:03:22.247983       1 service.go:306] Service volume-8375-8198/csi-hostpath-snapshotter updated: 1 ports
I0731 08:03:22.248003       1 service.go:306] Service services-9864/nodeport-update-service updated: 1 ports
I0731 08:03:22.248019       1 service.go:306] Service provisioning-3759-7888/csi-hostpath-attacher updated: 1 ports
I0731 08:03:22.248033       1 service.go:306] Service default/kubernetes updated: 1 ports
I0731 08:03:22.248049       1 service.go:306] Service volume-8375-8198/csi-hostpath-provisioner updated: 1 ports
I0731 08:03:22.248062       1 service.go:306] Service volume-expand-6870-9761/csi-hostpath-resizer updated: 1 ports
I0731 08:03:22.248089       1 service.go:306] Service volume-expand-6870-9761/csi-hostpath-snapshotter updated: 1 ports
I0731 08:03:22.248113       1 service.go:306] Service volume-expand-877-447/csi-hostpathplugin updated: 1 ports
I0731 08:03:22.248130       1 service.go:306] Service volume-expand-6870-9761/csi-hostpath-attacher updated: 1 ports
I0731 08:03:22.248222       1 service.go:306] Service volume-expand-6870-9761/csi-hostpathplugin updated: 1 ports
I0731 08:03:22.248245       1 service.go:306] Service provisioning-3759-7888/csi-hostpath-resizer updated: 1 ports
I0731 08:03:22.248264       1 service.go:306] Service provisioning-3759-7888/csi-hostpath-snapshotter updated: 1 ports
I0731 08:03:22.248283       1 service.go:306] Service volume-expand-877-447/csi-hostpath-attacher updated: 1 ports
W0731 08:03:22.251212       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0731 08:03:22.259285       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0731 08:03:22.343326       1 shared_informer.go:247] Caches are synced for service config
I0731 08:03:22.343477       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I0731 08:03:22.343562       1 proxier.go:816] "Not syncing iptables until Services and Endpoints have been received from master"
I0731 08:03:22.345717       1 shared_informer.go:247] Caches are synced for endpoint slice config
I0731 08:03:22.345777       1 service.go:421] Adding new service port "volume-8375-8198/csi-hostpath-resizer:dummy" at 100.64.239.177:12345/TCP
I0731 08:03:22.345801       1 service.go:421] Adding new service port "volume-expand-6870-9761/csi-hostpathplugin:dummy" at 100.71.96.84:12345/TCP
I0731 08:03:22.345814       1 service.go:421] Adding new service port "volume-expand-877-447/csi-hostpath-snapshotter:dummy" at 100.68.54.62:12345/TCP
I0731 08:03:22.345826       1 service.go:421] Adding new service port "volume-8375-8198/csi-hostpathplugin:dummy" at 100.70.146.137:12345/TCP
I0731 08:03:22.345837       1 service.go:421] Adding new service port "provisioning-3759-7888/csi-hostpath-provisioner:dummy" at 100.64.130.149:12345/TCP
I0731 08:03:22.345851       1 service.go:421] Adding new service port "volume-expand-877-447/csi-hostpath-attacher:dummy" at 100.67.177.171:12345/TCP
I0731 08:03:22.345862       1 service.go:421] Adding new service port "kube-system/kube-dns:dns" at 100.64.0.10:53/UDP
I0731 08:03:22.345872       1 service.go:421] Adding new service port "kube-system/kube-dns:dns-tcp" at 100.64.0.10:53/TCP
I0731 08:03:22.345882       1 service.go:421] Adding new service port "kube-system/kube-dns:metrics" at 100.64.0.10:9153/TCP
I0731 08:03:22.345945       1 service.go:421] Adding new service port "volume-expand-6870-9761/csi-hostpath-resizer:dummy" at 100.68.148.250:12345/TCP
I0731 08:03:22.345957       1 service.go:421] Adding new service port "provisioning-3759-7888/csi-hostpath-resizer:dummy" at 100.70.185.214:12345/TCP
I0731 08:03:22.345968       1 service.go:421] Adding new service port "volume-8375-8198/csi-hostpath-snapshotter:dummy" at 100.65.152.60:12345/TCP
I0731 08:03:22.345979       1 service.go:421] Adding new service port "provisioning-3759-7888/csi-hostpath-attacher:dummy" at 100.71.45.20:12345/TCP
I0731 08:03:22.345990       1 service.go:421] Adding new service port "default/kubernetes:https" at 100.64.0.1:443/TCP
I0731 08:03:22.346008       1 service.go:421] Adding new service port "volume-8375-8198/csi-hostpath-provisioner:dummy" at 100.68.184.56:12345/TCP
I0731 08:03:22.346019       1 service.go:421] Adding new service port "volume-expand-6870-9761/csi-hostpath-snapshotter:dummy" at 100.64.174.139:12345/TCP
I0731 08:03:22.346029       1 service.go:421] Adding new service port "conntrack-8739/boom-server" at 100.70.141.165:9000/TCP
I0731 08:03:22.346039       1 service.go:421] Adding new service port "provisioning-3759-7888/csi-hostpathplugin:dummy" at 100.64.174.250:12345/TCP
I0731 08:03:22.346050       1 service.go:421] Adding new service port "volume-8375-8198/csi-hostpath-attacher:dummy" at 100.65.213.104:12345/TCP
I0731 08:03:22.346061       1 service.go:421] Adding new service port "volume-expand-877-447/csi-hostpathplugin:dummy" at 100.65.81.215:12345/TCP
I0731 08:03:22.346071       1 service.go:421] Adding new service port "volume-expand-6870-9761/csi-hostpath-attacher:dummy" at 100.66.201.242:12345/TCP
I0731 08:03:22.346089       1 service.go:421] Adding new service port "provisioning-3759-7888/csi-hostpath-snapshotter:dummy" at 100.64.237.243:12345/TCP
I0731 08:03:22.346101       1 service.go:421] Adding new service port "volume-expand-877-447/csi-hostpath-provisioner:dummy" at 100.70.34.191:12345/TCP
I0731 08:03:22.346111       1 service.go:421] Adding new service port "services-9864/nodeport-update-service:tcp-port" at 100.68.102.196:80/TCP
I0731 08:03:22.346121       1 service.go:421] Adding new service port "crd-webhook-7764/e2e-test-crd-conversion-webhook" at 100.68.63.174:9443/TCP
I0731 08:03:22.346131       1 service.go:421] Adding new service port "volume-expand-6870-9761/csi-hostpath-provisioner:dummy" at 100.66.239.58:12345/TCP
I0731 08:03:22.346144       1 service.go:421] Adding new service port "volume-expand-877-447/csi-hostpath-resizer:dummy" at 100.64.98.181:12345/TCP
I0731 08:03:22.346454       1 proxier.go:841] "Stale service" protocol="udp" svcPortName="kube-system/kube-dns:dns" clusterIP="100.64.0.10"
I0731 08:03:22.346474       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:22.419535       1 proxier.go:1289] "Opened local port" port="\"nodePort for services-9864/nodeport-update-service:tcp-port\" (:31945/tcp4)"
I0731 08:03:22.835165       1 proxier.go:824] "syncProxyRules complete" elapsed="489.401129ms"
I0731 08:03:22.835199       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:22.887767       1 proxier.go:824] "syncProxyRules complete" elapsed="52.555628ms"
I0731 08:03:30.177977       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:30.223956       1 proxier.go:824] "syncProxyRules complete" elapsed="46.070915ms"
I0731 08:03:33.053364       1 service.go:306] Service crd-webhook-7764/e2e-test-crd-conversion-webhook updated: 0 ports
I0731 08:03:33.053406       1 service.go:446] Removing service port "crd-webhook-7764/e2e-test-crd-conversion-webhook"
I0731 08:03:33.053460       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:33.094660       1 proxier.go:824] "syncProxyRules complete" elapsed="41.241996ms"
I0731 08:03:33.094757       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:33.157571       1 proxier.go:824] "syncProxyRules complete" elapsed="62.850006ms"
I0731 08:03:34.158548       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:34.198627       1 proxier.go:824] "syncProxyRules complete" elapsed="40.143333ms"
I0731 08:03:35.199326       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:35.248485       1 proxier.go:824] "syncProxyRules complete" elapsed="49.233511ms"
I0731 08:03:36.249491       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:36.304926       1 proxier.go:824] "syncProxyRules complete" elapsed="55.510077ms"
I0731 08:03:37.305680       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:37.351773       1 proxier.go:824] "syncProxyRules complete" elapsed="46.179196ms"
I0731 08:03:38.005494       1 service.go:306] Service provisioning-3759-7888/csi-hostpath-attacher updated: 0 ports
I0731 08:03:38.059429       1 service.go:446] Removing service port "provisioning-3759-7888/csi-hostpath-attacher:dummy"
I0731 08:03:38.059516       1 proxier.go:854] "Syncing iptables rules"
I0731 08:03:38.094365       1 proxier.go:824] "syncProxyRules complete" elapsed="34.923365ms"
I0731 08:03:38.400704       1 service.go:306] Service provisioning-3759-7888/csi-hostpathplugin updated: 0 ports
I0731 08:03:38.670431       1 service.go:306] Service provisioning-3759-7888/csi-hostpath-provisioner updated: 0 ports
I0731 08:03:38.908164       1 service.go:306] Service provisioning-3759-7888/csi-hostpath-resizer updated: 0 ports
I0731 08:03:39.097225       1 service.go:446] Removing service port "provisioning-3759-7888/csi-hostpathplugin:dummy"
I0731 08:03:39.097272       1 service.go:446] Removing service port
\"provisioning-3759-7888/csi-hostpath-provisioner:dummy\"\nI0731 08:03:39.097282       1 service.go:446] Removing service port \"provisioning-3759-7888/csi-hostpath-resizer:dummy\"\nI0731 08:03:39.097376       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:39.152266       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.038999ms\"\nI0731 08:03:39.247103       1 service.go:306] Service provisioning-3759-7888/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:03:40.152764       1 service.go:446] Removing service port \"provisioning-3759-7888/csi-hostpath-snapshotter:dummy\"\nI0731 08:03:40.152919       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:40.190858       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.091204ms\"\nI0731 08:03:41.355959       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:41.388429       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.527606ms\"\nI0731 08:03:48.160632       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:48.211585       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.00962ms\"\nI0731 08:03:53.921256       1 service.go:306] Service volume-expand-6870-9761/csi-hostpath-attacher updated: 0 ports\nI0731 08:03:53.921302       1 service.go:446] Removing service port \"volume-expand-6870-9761/csi-hostpath-attacher:dummy\"\nI0731 08:03:53.921370       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:53.960448       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.136571ms\"\nI0731 08:03:53.987332       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:54.057848       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.561801ms\"\nI0731 08:03:54.256281       1 service.go:306] Service volume-expand-6870-9761/csi-hostpathplugin updated: 0 ports\nI0731 08:03:54.473980       1 service.go:306] Service volume-expand-6870-9761/csi-hostpath-provisioner updated: 0 ports\nI0731 08:03:54.711662       1 service.go:306] Service volume-expand-6870-9761/csi-hostpath-resizer updated: 0 ports\nI0731 08:03:54.946997       1 service.go:306] Service volume-expand-6870-9761/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:03:54.947035       1 service.go:446] Removing service port \"volume-expand-6870-9761/csi-hostpath-provisioner:dummy\"\nI0731 08:03:54.947050       1 service.go:446] Removing service port \"volume-expand-6870-9761/csi-hostpath-resizer:dummy\"\nI0731 08:03:54.947065       1 service.go:446] Removing service port \"volume-expand-6870-9761/csi-hostpath-snapshotter:dummy\"\nI0731 08:03:54.947078       1 service.go:446] Removing service port \"volume-expand-6870-9761/csi-hostpathplugin:dummy\"\nI0731 08:03:54.947188       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:54.997234       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.183941ms\"\nI0731 08:03:55.082183       1 service.go:306] Service services-9864/nodeport-update-service updated: 2 ports\nI0731 08:03:55.997774       1 service.go:423] Updating existing service port \"services-9864/nodeport-update-service:tcp-port\" at 100.68.102.196:80/TCP\nI0731 08:03:55.997805       1 service.go:421] Adding new service port \"services-9864/nodeport-update-service:udp-port\" at 100.68.102.196:80/UDP\nI0731 08:03:55.997961       1 proxier.go:841] \"Stale service\" protocol=\"udp\" svcPortName=\"services-9864/nodeport-update-service:udp-port\" clusterIP=\"100.68.102.196\"\nI0731 08:03:55.998022       1 proxier.go:848] Stale udp service NodePort 
services-9864/nodeport-update-service:udp-port -> 32552\nI0731 08:03:55.998040       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:56.045555       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-9864/nodeport-update-service:udp-port\\\" (:32552/udp4)\"\nI0731 08:03:56.045739       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-9864/nodeport-update-service:tcp-port\\\" (:30549/tcp4)\"\nI0731 08:03:56.060511       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"62.76595ms\"\nI0731 08:03:58.730930       1 service.go:306] Service services-8431/externalname-service updated: 1 ports\nI0731 08:03:58.730976       1 service.go:421] Adding new service port \"services-8431/externalname-service:http\" at 100.66.154.72:80/TCP\nI0731 08:03:58.731041       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:58.789742       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"58.749816ms\"\nI0731 08:03:58.789838       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:03:58.853036       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"63.248027ms\"\nI0731 08:04:03.159098       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:03.196555       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.496105ms\"\nI0731 08:04:05.558754       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:05.595770       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.223859ms\"\nI0731 08:04:08.237095       1 service.go:306] Service services-4367/nodeport-reuse updated: 1 ports\nI0731 08:04:08.237144       1 service.go:421] Adding new service port \"services-4367/nodeport-reuse\" at 100.67.33.164:80/TCP\nI0731 08:04:08.237206       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:08.267672       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-4367/nodeport-reuse\\\" (:31952/tcp4)\"\nI0731 08:04:08.273107       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.960837ms\"\nI0731 08:04:08.273186       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:08.304506       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.36155ms\"\nI0731 08:04:08.366586       1 service.go:306] Service services-4367/nodeport-reuse updated: 0 ports\nI0731 08:04:09.267393       1 service.go:446] Removing service port \"services-4367/nodeport-reuse\"\nI0731 08:04:09.267497       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:09.327176       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.780407ms\"\nI0731 08:04:09.503633       1 service.go:306] Service conntrack-8739/boom-server updated: 0 ports\nI0731 08:04:10.327936       1 service.go:446] Removing service port \"conntrack-8739/boom-server\"\nI0731 08:04:10.328040       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:10.378620       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.682651ms\"\nI0731 08:04:12.037065       1 service.go:306] Service services-4367/nodeport-reuse updated: 1 ports\nI0731 08:04:12.037109       1 service.go:421] Adding new service port \"services-4367/nodeport-reuse\" at 100.65.199.64:80/TCP\nI0731 08:04:12.037179       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:12.073282       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-4367/nodeport-reuse\\\" (:31952/tcp4)\"\nI0731 08:04:12.080324       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.207979ms\"\nI0731 08:04:12.188022       1 service.go:306] Service 
services-4367/nodeport-reuse updated: 0 ports\nI0731 08:04:13.080787       1 service.go:446] Removing service port \"services-4367/nodeport-reuse\"\nI0731 08:04:13.080893       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:13.148294       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"67.507608ms\"\nI0731 08:04:15.939150       1 service.go:306] Service services-8431/externalname-service updated: 0 ports\nI0731 08:04:15.939187       1 service.go:446] Removing service port \"services-8431/externalname-service:http\"\nI0731 08:04:15.939258       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:15.971272       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.075557ms\"\nI0731 08:04:15.971479       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:16.002841       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.524979ms\"\nI0731 08:04:16.180094       1 service.go:306] Service webhook-4131/e2e-test-webhook updated: 1 ports\nI0731 08:04:17.003059       1 service.go:421] Adding new service port \"webhook-4131/e2e-test-webhook\" at 100.69.12.206:8443/TCP\nI0731 08:04:17.003209       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:04:17.036066       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.000771ms\"\nI0731 08:04:17.842904       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\nI0731 08:04:17.843228       1 streamwatcher.go:111] Unexpected EOF during watch stream event decoding: unexpected EOF\nW0731 08:04:47.699614       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nW0731 08:04:48.585563       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nW0731 08:04:48.588361       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice\nI0731 08:05:03.568510       1 service.go:306] Service services-9864/nodeport-update-service updated: 0 ports\nI0731 08:05:03.568549       1 service.go:446] Removing service port \"services-9864/nodeport-update-service:tcp-port\"\nI0731 08:05:03.568562       1 service.go:446] Removing service port \"services-9864/nodeport-update-service:udp-port\"\nI0731 08:05:03.568627       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:03.756800       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"188.242917ms\"\nI0731 08:05:16.353541       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:16.441763       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.285517ms\"\nI0731 08:05:20.922966       1 service.go:306] Service proxy-6074/test-service updated: 1 ports\nI0731 08:05:20.923012       1 service.go:421] Adding new service port \"proxy-6074/test-service\" at 100.66.106.69:80/TCP\nI0731 08:05:20.923074       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:20.958160       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.139966ms\"\nI0731 08:05:20.978260       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:21.011958       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.752528ms\"\nI0731 08:05:25.052463       1 service.go:306] Service services-3210/sourceip-test updated: 1 ports\nI0731 08:05:25.052508       1 service.go:421] Adding new service port \"services-3210/sourceip-test\" at 100.66.214.72:8080/TCP\nI0731 
08:05:25.052568       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:25.092336       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.824827ms\"\nI0731 08:05:25.092654       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:25.124920       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.303266ms\"\nI0731 08:05:26.329119       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:26.363078       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.022084ms\"\nI0731 08:05:26.612785       1 service.go:306] Service provisioning-1115-2220/csi-hostpath-attacher updated: 1 ports\nI0731 08:05:26.933516       1 service.go:306] Service provisioning-1115-2220/csi-hostpathplugin updated: 1 ports\nI0731 08:05:27.141916       1 service.go:306] Service provisioning-1115-2220/csi-hostpath-provisioner updated: 1 ports\nI0731 08:05:27.141962       1 service.go:421] Adding new service port \"provisioning-1115-2220/csi-hostpathplugin:dummy\" at 100.64.97.78:12345/TCP\nI0731 08:05:27.141980       1 service.go:421] Adding new service port \"provisioning-1115-2220/csi-hostpath-provisioner:dummy\" at 100.68.211.11:12345/TCP\nI0731 08:05:27.141995       1 service.go:421] Adding new service port \"provisioning-1115-2220/csi-hostpath-attacher:dummy\" at 100.64.21.192:12345/TCP\nI0731 08:05:27.142048       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:27.179093       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.128753ms\"\nI0731 08:05:27.355121       1 service.go:306] Service provisioning-1115-2220/csi-hostpath-resizer updated: 1 ports\nI0731 08:05:27.564477       1 service.go:306] Service provisioning-1115-2220/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:05:28.115031       1 service.go:421] Adding new service port \"provisioning-1115-2220/csi-hostpath-resizer:dummy\" at 100.70.242.205:12345/TCP\nI0731 08:05:28.115065       1 service.go:421] Adding new service port \"provisioning-1115-2220/csi-hostpath-snapshotter:dummy\" at 100.65.31.139:12345/TCP\nI0731 08:05:28.115214       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:28.157422       1 service.go:306] Service proxy-6074/test-service updated: 0 ports\nI0731 08:05:28.158942       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.923342ms\"\nI0731 08:05:29.160111       1 service.go:446] Removing service port \"proxy-6074/test-service\"\nI0731 08:05:29.160211       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:29.202470       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.368034ms\"\nI0731 08:05:30.203698       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:30.245633       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.0386ms\"\nI0731 08:05:31.247985       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:31.400176       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"152.648418ms\"\nI0731 08:05:32.240021       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:32.330593       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"90.67196ms\"\nI0731 08:05:35.015667       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:35.053985       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.361626ms\"\nI0731 08:05:35.419128       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:35.461101       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.023034ms\"\nI0731 08:05:36.623277       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:36.658834       1 
proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.558127ms\"\nI0731 08:05:37.631383       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:37.672922       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.605044ms\"\nI0731 08:05:54.085897       1 service.go:306] Service ephemeral-5708-8271/csi-hostpath-attacher updated: 1 ports\nI0731 08:05:54.085948       1 service.go:421] Adding new service port \"ephemeral-5708-8271/csi-hostpath-attacher:dummy\" at 100.71.243.252:12345/TCP\nI0731 08:05:54.086011       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:54.136770       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.817755ms\"\nI0731 08:05:54.137181       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:54.176501       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.692447ms\"\nI0731 08:05:54.395193       1 service.go:306] Service ephemeral-5708-8271/csi-hostpathplugin updated: 1 ports\nI0731 08:05:54.656674       1 service.go:306] Service ephemeral-5708-8271/csi-hostpath-provisioner updated: 1 ports\nI0731 08:05:54.864373       1 service.go:306] Service ephemeral-5708-8271/csi-hostpath-resizer updated: 1 ports\nI0731 08:05:55.082443       1 service.go:306] Service ephemeral-5708-8271/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:05:55.103456       1 service.go:421] Adding new service port \"ephemeral-5708-8271/csi-hostpathplugin:dummy\" at 100.64.36.231:12345/TCP\nI0731 08:05:55.103501       1 service.go:421] Adding new service port \"ephemeral-5708-8271/csi-hostpath-provisioner:dummy\" at 100.66.246.41:12345/TCP\nI0731 08:05:55.103516       1 service.go:421] Adding new service port \"ephemeral-5708-8271/csi-hostpath-resizer:dummy\" at 100.64.103.117:12345/TCP\nI0731 08:05:55.103530       1 service.go:421] Adding new service port \"ephemeral-5708-8271/csi-hostpath-snapshotter:dummy\" at 100.66.131.114:12345/TCP\nI0731 08:05:55.103592       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:55.141624       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.17293ms\"\nI0731 08:05:59.176848       1 service.go:306] Service volume-expand-1672-2530/csi-hostpath-attacher updated: 1 ports\nI0731 08:05:59.177030       1 service.go:421] Adding new service port \"volume-expand-1672-2530/csi-hostpath-attacher:dummy\" at 100.66.179.55:12345/TCP\nI0731 08:05:59.177103       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:59.226499       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.473046ms\"\nI0731 08:05:59.226663       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:05:59.277634       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.089916ms\"\nI0731 08:05:59.491975       1 service.go:306] Service volume-expand-1672-2530/csi-hostpathplugin updated: 1 ports\nI0731 08:05:59.706421       1 service.go:306] Service volume-expand-1672-2530/csi-hostpath-provisioner updated: 1 ports\nI0731 08:05:59.919030       1 service.go:306] Service volume-expand-1672-2530/csi-hostpath-resizer updated: 1 ports\nI0731 08:06:00.137512       1 service.go:306] Service volume-expand-1672-2530/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:06:00.278148       1 service.go:421] Adding new service port \"volume-expand-1672-2530/csi-hostpathplugin:dummy\" at 100.65.229.13:12345/TCP\nI0731 08:06:00.278202       1 service.go:421] Adding new service port \"volume-expand-1672-2530/csi-hostpath-provisioner:dummy\" at 100.69.145.96:12345/TCP\nI0731 08:06:00.278214       1 service.go:421] Adding new service 
port \"volume-expand-1672-2530/csi-hostpath-resizer:dummy\" at 100.68.86.20:12345/TCP\nI0731 08:06:00.278224       1 service.go:421] Adding new service port \"volume-expand-1672-2530/csi-hostpath-snapshotter:dummy\" at 100.68.120.24:12345/TCP\nI0731 08:06:00.278299       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:00.372208       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"94.076907ms\"\nI0731 08:06:02.702964       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:02.811007       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"108.086503ms\"\nI0731 08:06:02.811261       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:02.896892       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"85.710066ms\"\nI0731 08:06:03.736238       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:03.779098       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.970183ms\"\nI0731 08:06:04.779385       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:04.823345       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.051136ms\"\nI0731 08:06:05.823588       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:05.860596       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.103089ms\"\nI0731 08:06:07.004147       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:07.045161       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.066805ms\"\nI0731 08:06:08.114441       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:08.149904       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.503419ms\"\nI0731 08:06:08.913711       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:08.963612       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.033779ms\"\nI0731 08:06:09.964301       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:09.996919       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.699337ms\"\nI0731 08:06:10.997997       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:11.049763       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.864507ms\"\nI0731 08:06:12.111528       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:12.150020       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.568639ms\"\nI0731 08:06:13.150707       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:13.191077       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.450509ms\"\nI0731 08:06:14.192071       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:14.223975       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.990457ms\"\nI0731 08:06:14.714685       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:14.749509       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.884717ms\"\nI0731 08:06:15.334874       1 service.go:306] Service pods-76/fooservice updated: 1 ports\nI0731 08:06:15.749740       1 service.go:421] Adding new service port \"pods-76/fooservice\" at 100.67.36.199:8765/TCP\nI0731 08:06:15.749891       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:15.820349       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.652553ms\"\nI0731 08:06:16.820664       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:16.867884       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.320819ms\"\nI0731 08:06:18.114850       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:18.169559       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"54.874886ms\"\nI0731 08:06:23.317760       1 service.go:306] Service pods-76/fooservice updated: 0 ports\nI0731 08:06:23.317807       1 service.go:446] Removing service port \"pods-76/fooservice\"\nI0731 08:06:23.317876       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:23.353902       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.083512ms\"\nI0731 08:06:23.354006       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:23.389034       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.092787ms\"\nI0731 08:06:24.389941       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:24.427206       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.340412ms\"\nI0731 08:06:25.427583       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:25.494123       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"66.620747ms\"\nI0731 08:06:26.711539       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:26.787200       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"75.713193ms\"\nI0731 08:06:27.787940       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:27.838835       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.021294ms\"\nI0731 08:06:28.582967       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:28.625280       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.404352ms\"\nI0731 08:06:28.689213       1 service.go:306] Service services-3210/sourceip-test updated: 0 ports\nI0731 08:06:29.625754       1 service.go:446] Removing service port \"services-3210/sourceip-test\"\nI0731 08:06:29.625863       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:29.643979       1 service.go:306] Service services-9996/test-service-mkgh6 updated: 1 ports\nI0731 08:06:29.665193       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.444067ms\"\nI0731 08:06:29.951149       1 service.go:306] Service services-9996/test-service-mkgh6 updated: 1 ports\nI0731 08:06:30.256476       1 service.go:306] Service services-9996/test-service-mkgh6 updated: 1 ports\nI0731 08:06:30.464409       1 service.go:306] Service services-9996/test-service-mkgh6 updated: 1 ports\nI0731 08:06:30.464457       1 service.go:421] Adding new service port \"services-9996/test-service-mkgh6:http\" at 100.71.77.225:80/TCP\nI0731 08:06:30.464626       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:30.506439       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.978156ms\"\nI0731 08:06:30.668035       1 service.go:306] Service services-9996/test-service-mkgh6 updated: 0 ports\nI0731 08:06:31.506601       1 service.go:446] Removing service port \"services-9996/test-service-mkgh6:http\"\nI0731 08:06:31.506745       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:31.541472       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.87493ms\"\nI0731 08:06:32.542048       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:32.592902       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.058114ms\"\nI0731 08:06:33.593967       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:33.666418       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"72.527172ms\"\nI0731 08:06:34.667182       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:34.724502       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.42589ms\"\nI0731 08:06:48.579862       1 service.go:306] Service volume-expand-1672-2530/csi-hostpath-attacher updated: 0 ports\nI0731 
08:06:48.579899       1 service.go:446] Removing service port \"volume-expand-1672-2530/csi-hostpath-attacher:dummy\"\nI0731 08:06:48.579974       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:48.629525       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.608454ms\"\nI0731 08:06:48.629628       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:48.673248       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.677397ms\"\nI0731 08:06:48.900241       1 service.go:306] Service volume-expand-1672-2530/csi-hostpathplugin updated: 0 ports\nI0731 08:06:49.137101       1 service.go:306] Service volume-expand-1672-2530/csi-hostpath-provisioner updated: 0 ports\nI0731 08:06:49.352134       1 service.go:306] Service volume-expand-1672-2530/csi-hostpath-resizer updated: 0 ports\nI0731 08:06:49.565260       1 service.go:306] Service volume-expand-1672-2530/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:06:49.674214       1 service.go:446] Removing service port \"volume-expand-1672-2530/csi-hostpathplugin:dummy\"\nI0731 08:06:49.674258       1 service.go:446] Removing service port \"volume-expand-1672-2530/csi-hostpath-provisioner:dummy\"\nI0731 08:06:49.674268       1 service.go:446] Removing service port \"volume-expand-1672-2530/csi-hostpath-resizer:dummy\"\nI0731 08:06:49.674276       1 service.go:446] Removing service port \"volume-expand-1672-2530/csi-hostpath-snapshotter:dummy\"\nI0731 08:06:49.674390       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:49.753170       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"78.956686ms\"\nI0731 08:06:57.760163       1 service.go:306] Service services-7709/nodeport-collision-1 updated: 1 ports\nI0731 08:06:57.760209       1 service.go:421] Adding new service port \"services-7709/nodeport-collision-1\" at 100.68.182.77:80/TCP\nI0731 08:06:57.760272       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:57.789240       1 proxier.go:1289] \"Opened local port\" port=\"\\\"nodePort for services-7709/nodeport-collision-1\\\" (:30331/tcp4)\"\nI0731 08:06:57.797855       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.627348ms\"\nI0731 08:06:57.798039       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:57.862089       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.122851ms\"\nI0731 08:06:57.997968       1 service.go:306] Service services-7709/nodeport-collision-1 updated: 0 ports\nI0731 08:06:58.119002       1 service.go:306] Service services-7709/nodeport-collision-2 updated: 1 ports\nI0731 08:06:58.862375       1 service.go:446] Removing service port \"services-7709/nodeport-collision-1\"\nI0731 08:06:58.862485       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:58.919119       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"56.749054ms\"\nI0731 08:06:59.834844       1 service.go:306] Service services-1777/up-down-1 updated: 1 ports\nI0731 08:06:59.834895       1 service.go:421] Adding new service port \"services-1777/up-down-1\" at 100.65.160.20:80/TCP\nI0731 08:06:59.834969       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:06:59.886302       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.398427ms\"\nI0731 08:07:00.887046       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:00.922926       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.940125ms\"\nI0731 08:07:01.924126       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:01.957585       1 proxier.go:824] \"syncProxyRules 
complete\" elapsed=\"33.544257ms\"\nI0731 08:07:03.255088       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:03.315856       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"60.852489ms\"\nI0731 08:07:09.115830       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:09.175305       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.63491ms\"\nI0731 08:07:10.001649       1 service.go:306] Service ephemeral-5708-8271/csi-hostpath-attacher updated: 0 ports\nI0731 08:07:10.001705       1 service.go:446] Removing service port \"ephemeral-5708-8271/csi-hostpath-attacher:dummy\"\nI0731 08:07:10.001779       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:10.112050       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"110.352642ms\"\nI0731 08:07:10.322458       1 service.go:306] Service ephemeral-5708-8271/csi-hostpathplugin updated: 0 ports\nI0731 08:07:10.322503       1 service.go:446] Removing service port \"ephemeral-5708-8271/csi-hostpathplugin:dummy\"\nI0731 08:07:10.322741       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:10.354529       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.016117ms\"\nI0731 08:07:10.552814       1 service.go:306] Service ephemeral-5708-8271/csi-hostpath-provisioner updated: 0 ports\nI0731 08:07:10.764519       1 service.go:306] Service ephemeral-5708-8271/csi-hostpath-resizer updated: 0 ports\nI0731 08:07:11.024706       1 service.go:306] Service ephemeral-5708-8271/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:07:11.354671       1 service.go:446] Removing service port \"ephemeral-5708-8271/csi-hostpath-provisioner:dummy\"\nI0731 08:07:11.354742       1 service.go:446] Removing service port \"ephemeral-5708-8271/csi-hostpath-resizer:dummy\"\nI0731 08:07:11.354752       1 service.go:446] Removing service port \"ephemeral-5708-8271/csi-hostpath-snapshotter:dummy\"\nI0731 08:07:11.354866       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:11.387194       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.528135ms\"\nI0731 08:07:12.296123       1 service.go:306] Service services-1777/up-down-2 updated: 1 ports\nI0731 08:07:12.296165       1 service.go:421] Adding new service port \"services-1777/up-down-2\" at 100.67.248.203:80/TCP\nI0731 08:07:12.296535       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:12.333029       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.860035ms\"\nI0731 08:07:13.334938       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:13.420883       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"86.005221ms\"\nI0731 08:07:14.421834       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:14.466859       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.115788ms\"\nI0731 08:07:16.339460       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:16.381010       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.60715ms\"\nI0731 08:07:18.880398       1 service.go:306] Service ephemeral-8460-1741/csi-hostpath-attacher updated: 1 ports\nI0731 08:07:18.880440       1 service.go:421] Adding new service port \"ephemeral-8460-1741/csi-hostpath-attacher:dummy\" at 100.64.242.153:12345/TCP\nI0731 08:07:18.880582       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:18.931135       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.685227ms\"\nI0731 08:07:18.931230       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:18.997785       1 proxier.go:824] 
\"syncProxyRules complete\" elapsed=\"66.602821ms\"\nI0731 08:07:19.205787       1 service.go:306] Service ephemeral-8460-1741/csi-hostpathplugin updated: 1 ports\nI0731 08:07:19.418350       1 service.go:306] Service ephemeral-8460-1741/csi-hostpath-provisioner updated: 1 ports\nI0731 08:07:19.630481       1 service.go:306] Service ephemeral-8460-1741/csi-hostpath-resizer updated: 1 ports\nI0731 08:07:19.839865       1 service.go:306] Service ephemeral-8460-1741/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:07:19.998299       1 service.go:421] Adding new service port \"ephemeral-8460-1741/csi-hostpath-resizer:dummy\" at 100.66.21.66:12345/TCP\nI0731 08:07:19.998353       1 service.go:421] Adding new service port \"ephemeral-8460-1741/csi-hostpath-snapshotter:dummy\" at 100.71.9.81:12345/TCP\nI0731 08:07:19.998367       1 service.go:421] Adding new service port \"ephemeral-8460-1741/csi-hostpathplugin:dummy\" at 100.70.82.133:12345/TCP\nI0731 08:07:19.998378       1 service.go:421] Adding new service port \"ephemeral-8460-1741/csi-hostpath-provisioner:dummy\" at 100.67.13.233:12345/TCP\nI0731 08:07:19.998464       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:20.046466       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.176267ms\"\nI0731 08:07:25.650511       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:25.705973       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.551385ms\"\nI0731 08:07:26.046814       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:26.088870       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.111869ms\"\nI0731 08:07:26.849361       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:26.882060       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.754923ms\"\nI0731 08:07:27.882732       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:27.926835       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.294811ms\"\nI0731 08:07:28.927907       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:28.964213       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.396241ms\"\nI0731 08:07:36.499046       1 service.go:306] Service provisioning-6691-320/csi-hostpath-attacher updated: 1 ports\nI0731 08:07:36.499089       1 service.go:421] Adding new service port \"provisioning-6691-320/csi-hostpath-attacher:dummy\" at 100.68.225.141:12345/TCP\nI0731 08:07:36.499192       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:36.535720       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.622272ms\"\nI0731 08:07:36.535918       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:36.568478       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.711684ms\"\nI0731 08:07:36.812218       1 service.go:306] Service provisioning-6691-320/csi-hostpathplugin updated: 1 ports\nI0731 08:07:37.023517       1 service.go:306] Service provisioning-6691-320/csi-hostpath-provisioner updated: 1 ports\nI0731 08:07:37.239233       1 service.go:306] Service provisioning-6691-320/csi-hostpath-resizer updated: 1 ports\nI0731 08:07:37.452117       1 service.go:306] Service provisioning-6691-320/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:07:37.569601       1 service.go:421] Adding new service port \"provisioning-6691-320/csi-hostpathplugin:dummy\" at 100.65.131.15:12345/TCP\nI0731 08:07:37.569635       1 service.go:421] Adding new service port \"provisioning-6691-320/csi-hostpath-provisioner:dummy\" at 100.65.191.199:12345/TCP\nI0731 
08:07:37.569648       1 service.go:421] Adding new service port \"provisioning-6691-320/csi-hostpath-resizer:dummy\" at 100.69.2.227:12345/TCP\nI0731 08:07:37.569658       1 service.go:421] Adding new service port \"provisioning-6691-320/csi-hostpath-snapshotter:dummy\" at 100.65.141.98:12345/TCP\nI0731 08:07:37.569762       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:37.634378       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.787372ms\"\nI0731 08:07:39.599600       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:39.676466       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"76.938684ms\"\nI0731 08:07:39.833547       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:39.926008       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"92.530287ms\"\nI0731 08:07:40.828157       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:40.866324       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.219559ms\"\nI0731 08:07:43.439876       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:43.476332       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"36.514466ms\"\nI0731 08:07:44.239107       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:44.389944       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"150.882201ms\"\nI0731 08:07:44.733170       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:44.786215       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"53.083859ms\"\nI0731 08:07:45.787367       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:45.833665       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.366451ms\"\nI0731 08:07:47.564759       1 service.go:306] Service volumemode-262-2962/csi-hostpath-attacher updated: 1 ports\nI0731 08:07:47.564807       1 service.go:421] Adding new service port \"volumemode-262-2962/csi-hostpath-attacher:dummy\" at 100.66.193.192:12345/TCP\nI0731 08:07:47.564877       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:47.629834       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"65.017164ms\"\nI0731 08:07:47.629939       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:47.716217       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"86.333269ms\"\nI0731 08:07:47.881521       1 service.go:306] Service volumemode-262-2962/csi-hostpathplugin updated: 1 ports\nI0731 08:07:48.102428       1 service.go:306] Service volumemode-262-2962/csi-hostpath-provisioner updated: 1 ports\nI0731 08:07:48.316897       1 service.go:306] Service volumemode-262-2962/csi-hostpath-resizer updated: 1 ports\nI0731 08:07:48.579191       1 service.go:306] Service volumemode-262-2962/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:07:48.579238       1 service.go:421] Adding new service port \"volumemode-262-2962/csi-hostpathplugin:dummy\" at 100.69.40.48:12345/TCP\nI0731 08:07:48.579256       1 service.go:421] Adding new service port \"volumemode-262-2962/csi-hostpath-provisioner:dummy\" at 100.65.35.42:12345/TCP\nI0731 08:07:48.579266       1 service.go:421] Adding new service port \"volumemode-262-2962/csi-hostpath-resizer:dummy\" at 100.65.40.16:12345/TCP\nI0731 08:07:48.579275       1 service.go:421] Adding new service port \"volumemode-262-2962/csi-hostpath-snapshotter:dummy\" at 100.65.28.146:12345/TCP\nI0731 08:07:48.579358       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:48.634473       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"55.207382ms\"\nI0731 08:07:49.635562       1 
proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:49.668870       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.369893ms\"\nI0731 08:07:53.847965       1 service.go:306] Service dns-2241/test-service-2 updated: 1 ports\nI0731 08:07:53.848014       1 service.go:421] Adding new service port \"dns-2241/test-service-2:http\" at 100.67.111.6:80/TCP\nI0731 08:07:53.848093       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:53.894362       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.341512ms\"\nI0731 08:07:53.894463       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:53.935911       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"41.5057ms\"\nI0731 08:07:55.824331       1 service.go:306] Service services-1777/up-down-1 updated: 0 ports\nI0731 08:07:55.824507       1 service.go:446] Removing service port \"services-1777/up-down-1\"\nI0731 08:07:55.824614       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:55.862104       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.590534ms\"\nI0731 08:07:55.862199       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:55.899849       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.703772ms\"\nI0731 08:07:56.842860       1 service.go:306] Service endpointslicemirroring-6544/example-custom-endpoints updated: 1 ports\nI0731 08:07:56.900908       1 service.go:421] Adding new service port \"endpointslicemirroring-6544/example-custom-endpoints:example\" at 100.67.119.72:80/TCP\nI0731 08:07:56.901054       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:56.939505       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.674731ms\"\nI0731 08:07:57.939863       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:57.978843       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.082025ms\"\nI0731 08:07:58.979068       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:07:59.022035       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.047649ms\"\nI0731 08:08:00.022844       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:00.187568       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"164.900497ms\"\nI0731 08:08:02.758927       1 service.go:306] Service endpointslicemirroring-6544/example-custom-endpoints updated: 0 ports\nI0731 08:08:02.758969       1 service.go:446] Removing service port \"endpointslicemirroring-6544/example-custom-endpoints:example\"\nI0731 08:08:02.759052       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:02.829970       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"70.989251ms\"\nI0731 08:08:03.055041       1 service.go:306] Service provisioning-286-7336/csi-hostpath-attacher updated: 1 ports\nI0731 08:08:03.055145       1 service.go:421] Adding new service port \"provisioning-286-7336/csi-hostpath-attacher:dummy\" at 100.64.116.226:12345/TCP\nI0731 08:08:03.055232       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:03.100550       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.456086ms\"\nI0731 08:08:03.367046       1 service.go:306] Service provisioning-286-7336/csi-hostpathplugin updated: 1 ports\nI0731 08:08:03.640169       1 service.go:306] Service provisioning-286-7336/csi-hostpath-provisioner updated: 1 ports\nI0731 08:08:03.788020       1 service.go:306] Service provisioning-286-7336/csi-hostpath-resizer updated: 1 ports\nI0731 08:08:03.788066       1 service.go:421] Adding new service port 
\"provisioning-286-7336/csi-hostpathplugin:dummy\" at 100.68.116.48:12345/TCP\nI0731 08:08:03.788082       1 service.go:421] Adding new service port \"provisioning-286-7336/csi-hostpath-provisioner:dummy\" at 100.67.99.205:12345/TCP\nI0731 08:08:03.788094       1 service.go:421] Adding new service port \"provisioning-286-7336/csi-hostpath-resizer:dummy\" at 100.70.255.12:12345/TCP\nI0731 08:08:03.788172       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:03.849670       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.593658ms\"\nI0731 08:08:03.999976       1 service.go:306] Service provisioning-286-7336/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:08:04.850865       1 service.go:421] Adding new service port \"provisioning-286-7336/csi-hostpath-snapshotter:dummy\" at 100.64.173.139:12345/TCP\nI0731 08:08:04.851018       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:04.916318       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"65.512706ms\"\nI0731 08:08:09.148360       1 service.go:306] Service services-1777/up-down-3 updated: 1 ports\nI0731 08:08:09.148410       1 service.go:421] Adding new service port \"services-1777/up-down-3\" at 100.70.70.197:80/TCP\nI0731 08:08:09.148725       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:09.186076       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.661513ms\"\nI0731 08:08:09.186769       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:09.224907       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.789134ms\"\nI0731 08:08:10.369021       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:10.406015       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.05823ms\"\nI0731 08:08:10.612158       1 service.go:306] Service endpointslice-965/example-int-port updated: 1 ports\nI0731 08:08:10.719776       1 service.go:306] Service endpointslice-965/example-named-port updated: 1 ports\nI0731 08:08:10.827443       1 service.go:306] Service endpointslice-965/example-no-match updated: 1 ports\nI0731 08:08:11.407117       1 service.go:421] Adding new service port \"endpointslice-965/example-named-port:http\" at 100.66.245.182:80/TCP\nI0731 08:08:11.407179       1 service.go:421] Adding new service port \"endpointslice-965/example-no-match:example-no-match\" at 100.69.124.178:80/TCP\nI0731 08:08:11.407191       1 service.go:421] Adding new service port \"endpointslice-965/example-int-port:example\" at 100.67.17.216:80/TCP\nI0731 08:08:11.407313       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:11.442788       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.688979ms\"\nI0731 08:08:12.312361       1 service.go:306] Service ephemeral-8460-1741/csi-hostpath-attacher updated: 0 ports\nI0731 08:08:12.312399       1 service.go:446] Removing service port \"ephemeral-8460-1741/csi-hostpath-attacher:dummy\"\nI0731 08:08:12.312597       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:12.351890       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.479776ms\"\nI0731 08:08:12.653113       1 service.go:306] Service ephemeral-8460-1741/csi-hostpathplugin updated: 0 ports\nI0731 08:08:12.868509       1 service.go:306] Service ephemeral-8460-1741/csi-hostpath-provisioner updated: 0 ports\nI0731 08:08:13.094007       1 service.go:306] Service ephemeral-8460-1741/csi-hostpath-resizer updated: 0 ports\nI0731 08:08:13.306019       1 service.go:306] Service ephemeral-8460-1741/csi-hostpath-snapshotter updated: 0 ports\nI0731 
08:08:13.306058       1 service.go:446] Removing service port \"ephemeral-8460-1741/csi-hostpath-resizer:dummy\"\nI0731 08:08:13.306074       1 service.go:446] Removing service port \"ephemeral-8460-1741/csi-hostpath-snapshotter:dummy\"\nI0731 08:08:13.306082       1 service.go:446] Removing service port \"ephemeral-8460-1741/csi-hostpathplugin:dummy\"\nI0731 08:08:13.306092       1 service.go:446] Removing service port \"ephemeral-8460-1741/csi-hostpath-provisioner:dummy\"\nI0731 08:08:13.306196       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:13.345850       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"39.782261ms\"\nI0731 08:08:14.346115       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:14.381907       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.914353ms\"\nI0731 08:08:15.382928       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:15.445197       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"62.381132ms\"\nI0731 08:08:16.270940       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:16.306226       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.364906ms\"\nI0731 08:08:18.079230       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:18.117035       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.940587ms\"\nI0731 08:08:23.406893       1 service.go:306] Service provisioning-6691-320/csi-hostpath-attacher updated: 0 ports\nI0731 08:08:23.406930       1 service.go:446] Removing service port \"provisioning-6691-320/csi-hostpath-attacher:dummy\"\nI0731 08:08:23.407025       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:23.442754       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.813252ms\"\nI0731 08:08:23.445046       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:23.480192       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"35.207018ms\"\nI0731 08:08:23.730858       1 service.go:306] Service provisioning-6691-320/csi-hostpathplugin updated: 0 ports\nI0731 08:08:23.951678       1 service.go:306] Service provisioning-6691-320/csi-hostpath-provisioner updated: 0 ports\nI0731 08:08:24.176491       1 service.go:306] Service provisioning-6691-320/csi-hostpath-resizer updated: 0 ports\nI0731 08:08:24.391487       1 service.go:306] Service provisioning-6691-320/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:08:24.480750       1 service.go:446] Removing service port \"provisioning-6691-320/csi-hostpath-snapshotter:dummy\"\nI0731 08:08:24.480785       1 service.go:446] Removing service port \"provisioning-6691-320/csi-hostpathplugin:dummy\"\nI0731 08:08:24.480795       1 service.go:446] Removing service port \"provisioning-6691-320/csi-hostpath-provisioner:dummy\"\nI0731 08:08:24.480805       1 service.go:446] Removing service port \"provisioning-6691-320/csi-hostpath-resizer:dummy\"\nI0731 08:08:24.480940       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:24.550190       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.442186ms\"\nI0731 08:08:28.366334       1 service.go:306] Service services-8650/hairpin-test updated: 1 ports\nI0731 08:08:28.366378       1 service.go:421] Adding new service port \"services-8650/hairpin-test\" at 100.67.79.18:8080/TCP\nI0731 08:08:28.366548       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:28.410032       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"43.646794ms\"\nI0731 08:08:28.410265       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 
08:08:28.455090       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.002107ms\"\nI0731 08:08:29.878229       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:29.920278       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.116602ms\"\nI0731 08:08:31.878222       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:31.927088       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"48.941031ms\"\nI0731 08:08:31.981864       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:32.034731       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.89712ms\"\nI0731 08:08:32.317436       1 service.go:306] Service dns-2241/test-service-2 updated: 0 ports\nI0731 08:08:32.883100       1 service.go:446] Removing service port \"dns-2241/test-service-2:http\"\nI0731 08:08:32.883296       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:32.929053       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.945124ms\"\nI0731 08:08:33.778991       1 service.go:306] Service volumemode-262-2962/csi-hostpath-attacher updated: 0 ports\nI0731 08:08:33.929704       1 service.go:446] Removing service port \"volumemode-262-2962/csi-hostpath-attacher:dummy\"\nI0731 08:08:33.930320       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:33.998607       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"69.007938ms\"\nI0731 08:08:34.107539       1 service.go:306] Service volumemode-262-2962/csi-hostpathplugin updated: 0 ports\nI0731 08:08:34.333711       1 service.go:306] Service volumemode-262-2962/csi-hostpath-provisioner updated: 0 ports\nI0731 08:08:34.557022       1 service.go:306] Service volumemode-262-2962/csi-hostpath-resizer updated: 0 ports\nI0731 08:08:34.777288       1 service.go:306] Service volumemode-262-2962/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:08:34.999373       1 service.go:446] Removing service port \"volumemode-262-2962/csi-hostpathplugin:dummy\"\nI0731 08:08:34.999427       1 service.go:446] Removing service port \"volumemode-262-2962/csi-hostpath-provisioner:dummy\"\nI0731 08:08:34.999438       1 service.go:446] Removing service port \"volumemode-262-2962/csi-hostpath-resizer:dummy\"\nI0731 08:08:34.999447       1 service.go:446] Removing service port \"volumemode-262-2962/csi-hostpath-snapshotter:dummy\"\nI0731 08:08:34.999551       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:35.039791       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.416814ms\"\nI0731 08:08:39.808942       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:39.834915       1 service.go:306] Service services-8650/hairpin-test updated: 0 ports\nI0731 08:08:39.873272       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.412365ms\"\nI0731 08:08:39.873306       1 service.go:446] Removing service port \"services-8650/hairpin-test\"\nI0731 08:08:39.873401       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:39.935120       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.801387ms\"\nI0731 08:08:47.607970       1 service.go:306] Service endpointslice-965/example-int-port updated: 0 ports\nI0731 08:08:47.608009       1 service.go:446] Removing service port \"endpointslice-965/example-int-port:example\"\nI0731 08:08:47.608104       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:47.624467       1 service.go:306] Service endpointslice-965/example-named-port updated: 0 ports\nI0731 08:08:47.636102       1 service.go:306] Service endpointslice-965/example-no-match 
updated: 0 ports\nI0731 08:08:47.657271       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.247298ms\"\nI0731 08:08:47.657301       1 service.go:446] Removing service port \"endpointslice-965/example-named-port:http\"\nI0731 08:08:47.657314       1 service.go:446] Removing service port \"endpointslice-965/example-no-match:example-no-match\"\nI0731 08:08:47.657425       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:47.708335       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"51.02607ms\"\nI0731 08:08:48.708882       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:48.747177       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.349803ms\"\nI0731 08:08:48.827846       1 service.go:306] Service services-3877/clusterip-service updated: 1 ports\nI0731 08:08:48.936808       1 service.go:306] Service services-3877/externalsvc updated: 1 ports\nI0731 08:08:49.078988       1 service.go:306] Service services-1777/up-down-2 updated: 0 ports\nI0731 08:08:49.092101       1 service.go:306] Service services-1777/up-down-3 updated: 0 ports\nI0731 08:08:49.747552       1 service.go:421] Adding new service port \"services-3877/externalsvc\" at 100.70.217.86:80/TCP\nI0731 08:08:49.747598       1 service.go:446] Removing service port \"services-1777/up-down-2\"\nI0731 08:08:49.747610       1 service.go:446] Removing service port \"services-1777/up-down-3\"\nI0731 08:08:49.747625       1 service.go:421] Adding new service port \"services-3877/clusterip-service\" at 100.71.25.155:80/TCP\nI0731 08:08:49.747814       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:49.811813       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"64.459367ms\"\nI0731 08:08:52.541972       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:52.576034       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.113168ms\"\nI0731 08:08:53.378527       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:53.423973       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.52615ms\"\nI0731 08:08:55.504982       1 service.go:306] Service services-3877/clusterip-service updated: 0 ports\nI0731 08:08:55.505031       1 service.go:446] Removing service port \"services-3877/clusterip-service\"\nI0731 08:08:55.505108       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:55.593146       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.066938ms\"\nI0731 08:08:55.593270       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:08:55.702559       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"109.368555ms\"\nI0731 08:09:03.585547       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:03.644845       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"59.364186ms\"\nI0731 08:09:03.644954       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:03.682340       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.455525ms\"\nI0731 08:09:10.466305       1 service.go:306] Service services-2342/tolerate-unready updated: 1 ports\nI0731 08:09:10.466357       1 service.go:421] Adding new service port \"services-2342/tolerate-unready:http\" at 100.65.244.19:80/TCP\nI0731 08:09:10.466433       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:10.506926       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.540092ms\"\nI0731 08:09:10.507023       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:10.539247       1 proxier.go:824] \"syncProxyRules complete\" 
elapsed=\"32.278921ms\"\nI0731 08:09:11.823583       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:11.871240       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.72857ms\"\nI0731 08:09:12.359636       1 service.go:306] Service apply-3330/test-svc updated: 1 ports\nI0731 08:09:12.564667       1 service.go:306] Service apply-3330/test-svc updated: 1 ports\nI0731 08:09:12.564733       1 service.go:421] Adding new service port \"apply-3330/test-svc\" at 100.70.23.237:8080/UDP\nI0731 08:09:12.564935       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:12.609834       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.092191ms\"\nI0731 08:09:15.102864       1 service.go:306] Service services-3877/externalsvc updated: 0 ports\nI0731 08:09:15.102901       1 service.go:446] Removing service port \"services-3877/externalsvc\"\nI0731 08:09:15.102980       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:15.155869       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"52.949836ms\"\nI0731 08:09:15.249246       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:15.294641       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.467239ms\"\nI0731 08:09:16.935148       1 service.go:306] Service webhook-8835/e2e-test-webhook updated: 1 ports\nI0731 08:09:16.935192       1 service.go:421] Adding new service port \"webhook-8835/e2e-test-webhook\" at 100.65.222.213:8443/TCP\nI0731 08:09:16.935275       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:16.982266       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.059838ms\"\nI0731 08:09:17.530100       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:17.603056       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"73.02552ms\"\nI0731 08:09:17.895713       1 service.go:306] Service apply-3330/test-svc updated: 0 ports\nI0731 08:09:18.603201       1 service.go:446] Removing service port \"apply-3330/test-svc\"\nI0731 08:09:18.603320       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:18.665640       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"62.442722ms\"\nI0731 08:09:18.889958       1 service.go:306] Service webhook-8835/e2e-test-webhook updated: 0 ports\nI0731 08:09:19.276961       1 service.go:306] Service ephemeral-4656-5706/csi-hostpath-attacher updated: 1 ports\nI0731 08:09:19.276999       1 service.go:446] Removing service port \"webhook-8835/e2e-test-webhook\"\nI0731 08:09:19.277021       1 service.go:421] Adding new service port \"ephemeral-4656-5706/csi-hostpath-attacher:dummy\" at 100.68.112.198:12345/TCP\nI0731 08:09:19.277101       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:19.323032       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"46.016173ms\"\nI0731 08:09:19.605011       1 service.go:306] Service ephemeral-4656-5706/csi-hostpathplugin updated: 1 ports\nI0731 08:09:19.821886       1 service.go:306] Service ephemeral-4656-5706/csi-hostpath-provisioner updated: 1 ports\nI0731 08:09:20.035489       1 service.go:306] Service ephemeral-4656-5706/csi-hostpath-resizer updated: 1 ports\nI0731 08:09:20.251085       1 service.go:306] Service ephemeral-4656-5706/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:09:20.251132       1 service.go:421] Adding new service port \"ephemeral-4656-5706/csi-hostpathplugin:dummy\" at 100.64.22.229:12345/TCP\nI0731 08:09:20.251149       1 service.go:421] Adding new service port \"ephemeral-4656-5706/csi-hostpath-provisioner:dummy\" at 
100.67.28.13:12345/TCP\nI0731 08:09:20.251164       1 service.go:421] Adding new service port \"ephemeral-4656-5706/csi-hostpath-resizer:dummy\" at 100.71.156.221:12345/TCP\nI0731 08:09:20.251175       1 service.go:421] Adding new service port \"ephemeral-4656-5706/csi-hostpath-snapshotter:dummy\" at 100.67.138.216:12345/TCP\nI0731 08:09:20.251266       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:20.312793       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"61.647861ms\"\nI0731 08:09:20.899423       1 service.go:306] Service services-2342/tolerate-unready updated: 0 ports\nI0731 08:09:21.313582       1 service.go:446] Removing service port \"services-2342/tolerate-unready:http\"\nI0731 08:09:21.313762       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:21.345556       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.981799ms\"\nI0731 08:09:22.598211       1 service.go:306] Service provisioning-286-7336/csi-hostpath-attacher updated: 0 ports\nI0731 08:09:22.598253       1 service.go:446] Removing service port \"provisioning-286-7336/csi-hostpath-attacher:dummy\"\nI0731 08:09:22.598343       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:22.683552       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"85.286865ms\"\nI0731 08:09:22.868395       1 service.go:306] Service ephemeral-9829-2155/csi-hostpath-attacher updated: 1 ports\nI0731 08:09:22.927286       1 service.go:306] Service provisioning-286-7336/csi-hostpathplugin updated: 0 ports\nI0731 08:09:23.142586       1 service.go:306] Service provisioning-286-7336/csi-hostpath-provisioner updated: 0 ports\nI0731 08:09:23.142630       1 service.go:421] Adding new service port \"ephemeral-9829-2155/csi-hostpath-attacher:dummy\" at 100.67.140.46:12345/TCP\nI0731 08:09:23.142646       1 service.go:446] Removing service port \"provisioning-286-7336/csi-hostpathplugin:dummy\"\nI0731 08:09:23.142655       1 service.go:446] Removing service port \"provisioning-286-7336/csi-hostpath-provisioner:dummy\"\nI0731 08:09:23.142792       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:23.181833       1 service.go:306] Service ephemeral-9829-2155/csi-hostpathplugin updated: 1 ports\nI0731 08:09:23.185378       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"42.741944ms\"\nI0731 08:09:23.361092       1 service.go:306] Service provisioning-286-7336/csi-hostpath-resizer updated: 0 ports\nI0731 08:09:23.401133       1 service.go:306] Service ephemeral-9829-2155/csi-hostpath-provisioner updated: 1 ports\nI0731 08:09:23.588885       1 service.go:306] Service provisioning-286-7336/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:09:23.624624       1 service.go:306] Service ephemeral-9829-2155/csi-hostpath-resizer updated: 1 ports\nI0731 08:09:23.837931       1 service.go:306] Service ephemeral-9829-2155/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:09:24.186302       1 service.go:446] Removing service port \"provisioning-286-7336/csi-hostpath-resizer:dummy\"\nI0731 08:09:24.186375       1 service.go:421] Adding new service port \"ephemeral-9829-2155/csi-hostpath-provisioner:dummy\" at 100.71.187.171:12345/TCP\nI0731 08:09:24.186386       1 service.go:446] Removing service port \"provisioning-286-7336/csi-hostpath-snapshotter:dummy\"\nI0731 08:09:24.186401       1 service.go:421] Adding new service port \"ephemeral-9829-2155/csi-hostpath-resizer:dummy\" at 100.67.116.54:12345/TCP\nI0731 08:09:24.186415       1 service.go:421] Adding new service port 
\"ephemeral-9829-2155/csi-hostpath-snapshotter:dummy\" at 100.68.168.212:12345/TCP\nI0731 08:09:24.186428       1 service.go:421] Adding new service port \"ephemeral-9829-2155/csi-hostpathplugin:dummy\" at 100.65.147.46:12345/TCP\nI0731 08:09:24.186552       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:24.219866       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.572546ms\"\nI0731 08:09:28.488553       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:28.520628       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.11182ms\"\nI0731 08:09:29.085517       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:29.125567       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.101934ms\"\nI0731 08:09:30.125853       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:30.163359       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.577103ms\"\nI0731 08:09:30.686166       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:30.731100       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"44.96725ms\"\nI0731 08:09:31.731487       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:31.781137       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"49.750555ms\"\nI0731 08:09:32.639515       1 service.go:306] Service volume-8375-8198/csi-hostpath-attacher updated: 0 ports\nI0731 08:09:32.639556       1 service.go:446] Removing service port \"volume-8375-8198/csi-hostpath-attacher:dummy\"\nI0731 08:09:32.639658       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:32.680513       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"40.938137ms\"\nI0731 08:09:32.965134       1 service.go:306] Service volume-8375-8198/csi-hostpathplugin updated: 0 ports\nI0731 08:09:33.178228       1 service.go:306] Service volume-8375-8198/csi-hostpath-provisioner updated: 0 ports\nI0731 08:09:33.400251       1 service.go:306] Service volume-8375-8198/csi-hostpath-resizer updated: 0 ports\nI0731 08:09:33.618956       1 service.go:306] Service volume-8375-8198/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:09:33.618999       1 service.go:446] Removing service port \"volume-8375-8198/csi-hostpathplugin:dummy\"\nI0731 08:09:33.619015       1 service.go:446] Removing service port \"volume-8375-8198/csi-hostpath-provisioner:dummy\"\nI0731 08:09:33.619024       1 service.go:446] Removing service port \"volume-8375-8198/csi-hostpath-resizer:dummy\"\nI0731 08:09:33.619084       1 service.go:446] Removing service port \"volume-8375-8198/csi-hostpath-snapshotter:dummy\"\nI0731 08:09:33.619231       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:33.707354       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"88.339055ms\"\nI0731 08:09:34.707783       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:34.740600       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"32.928044ms\"\nI0731 08:09:51.806713       1 service.go:306] Service proxy-1140/proxy-service-vp6ld updated: 4 ports\nI0731 08:09:51.806756       1 service.go:421] Adding new service port \"proxy-1140/proxy-service-vp6ld:portname1\" at 100.64.244.164:80/TCP\nI0731 08:09:51.806769       1 service.go:421] Adding new service port \"proxy-1140/proxy-service-vp6ld:portname2\" at 100.64.244.164:81/TCP\nI0731 08:09:51.806778       1 service.go:421] Adding new service port \"proxy-1140/proxy-service-vp6ld:tlsportname1\" at 100.64.244.164:443/TCP\nI0731 08:09:51.806787       1 service.go:421] Adding new service port 
\"proxy-1140/proxy-service-vp6ld:tlsportname2\" at 100.64.244.164:444/TCP\nI0731 08:09:51.806856       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:51.879578       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"72.817315ms\"\nI0731 08:09:51.879813       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:51.911515       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.896106ms\"\nI0731 08:09:52.834778       1 service.go:306] Service ephemeral-3068-7957/csi-hostpath-attacher updated: 1 ports\nI0731 08:09:52.834819       1 service.go:421] Adding new service port \"ephemeral-3068-7957/csi-hostpath-attacher:dummy\" at 100.71.135.119:12345/TCP\nI0731 08:09:52.834888       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:52.868053       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"33.230316ms\"\nI0731 08:09:53.035869       1 service.go:306] Service volume-expand-877-447/csi-hostpath-attacher updated: 0 ports\nI0731 08:09:53.173543       1 service.go:306] Service ephemeral-3068-7957/csi-hostpathplugin updated: 1 ports\nI0731 08:09:53.363207       1 service.go:306] Service volume-expand-877-447/csi-hostpathplugin updated: 0 ports\nI0731 08:09:53.390890       1 service.go:306] Service ephemeral-3068-7957/csi-hostpath-provisioner updated: 1 ports\nI0731 08:09:53.575112       1 service.go:306] Service volume-expand-877-447/csi-hostpath-provisioner updated: 0 ports\nI0731 08:09:53.607847       1 service.go:306] Service ephemeral-3068-7957/csi-hostpath-resizer updated: 1 ports\nI0731 08:09:53.816425       1 service.go:306] Service volume-expand-877-447/csi-hostpath-resizer updated: 0 ports\nI0731 08:09:53.816501       1 service.go:446] Removing service port \"volume-expand-877-447/csi-hostpath-provisioner:dummy\"\nI0731 08:09:53.819928       1 service.go:421] Adding new service port \"ephemeral-3068-7957/csi-hostpath-resizer:dummy\" at 100.65.83.168:12345/TCP\nI0731 08:09:53.819960       1 service.go:446] Removing service port \"volume-expand-877-447/csi-hostpath-resizer:dummy\"\nI0731 08:09:53.819969       1 service.go:446] Removing service port \"volume-expand-877-447/csi-hostpath-attacher:dummy\"\nI0731 08:09:53.819980       1 service.go:421] Adding new service port \"ephemeral-3068-7957/csi-hostpathplugin:dummy\" at 100.66.191.163:12345/TCP\nI0731 08:09:53.819988       1 service.go:446] Removing service port \"volume-expand-877-447/csi-hostpathplugin:dummy\"\nI0731 08:09:53.820000       1 service.go:421] Adding new service port \"ephemeral-3068-7957/csi-hostpath-provisioner:dummy\" at 100.68.47.198:12345/TCP\nI0731 08:09:53.820191       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:53.835829       1 service.go:306] Service ephemeral-3068-7957/csi-hostpath-snapshotter updated: 1 ports\nI0731 08:09:53.953409       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"136.917849ms\"\nI0731 08:09:54.066623       1 service.go:306] Service volume-expand-877-447/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:09:54.953842       1 service.go:421] Adding new service port \"ephemeral-3068-7957/csi-hostpath-snapshotter:dummy\" at 100.65.133.123:12345/TCP\nI0731 08:09:54.953872       1 service.go:446] Removing service port \"volume-expand-877-447/csi-hostpath-snapshotter:dummy\"\nI0731 08:09:54.953993       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:55.008193       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"54.364675ms\"\nI0731 08:09:56.892231       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 
08:09:56.950050       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"57.897699ms\"\nI0731 08:09:58.764483       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:58.810312       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"45.872898ms\"\nI0731 08:09:59.165370       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:09:59.196961       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"31.661307ms\"\nI0731 08:10:00.766682       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:00.817498       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"50.873818ms\"\nI0731 08:10:01.164256       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:01.202289       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"38.086361ms\"\nI0731 08:10:02.203184       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:02.309085       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"105.98231ms\"\nI0731 08:10:16.232737       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:16.280243       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"47.571012ms\"\nI0731 08:10:16.319893       1 service.go:306] Service proxy-1140/proxy-service-vp6ld updated: 0 ports\nI0731 08:10:16.319928       1 service.go:446] Removing service port \"proxy-1140/proxy-service-vp6ld:portname1\"\nI0731 08:10:16.320065       1 service.go:446] Removing service port \"proxy-1140/proxy-service-vp6ld:portname2\"\nI0731 08:10:16.320074       1 service.go:446] Removing service port \"proxy-1140/proxy-service-vp6ld:tlsportname1\"\nI0731 08:10:16.320081       1 service.go:446] Removing service port \"proxy-1140/proxy-service-vp6ld:tlsportname2\"\nI0731 08:10:16.320263       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:16.357134       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"37.196253ms\"\nI0731 08:10:17.138386       1 service.go:306] Service ephemeral-4656-5706/csi-hostpath-attacher updated: 0 ports\nI0731 08:10:17.357650       1 service.go:446] Removing service port \"ephemeral-4656-5706/csi-hostpath-attacher:dummy\"\nI0731 08:10:17.357788       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:17.391879       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"34.230666ms\"\nI0731 08:10:17.458110       1 service.go:306] Service ephemeral-4656-5706/csi-hostpathplugin updated: 0 ports\nI0731 08:10:17.689309       1 service.go:306] Service ephemeral-4656-5706/csi-hostpath-provisioner updated: 0 ports\nI0731 08:10:17.906024       1 service.go:306] Service ephemeral-4656-5706/csi-hostpath-resizer updated: 0 ports\nI0731 08:10:18.123837       1 service.go:306] Service ephemeral-4656-5706/csi-hostpath-snapshotter updated: 0 ports\nI0731 08:10:18.392812       1 service.go:446] Removing service port \"ephemeral-4656-5706/csi-hostpathplugin:dummy\"\nI0731 08:10:18.392870       1 service.go:446] Removing service port \"ephemeral-4656-5706/csi-hostpath-provisioner:dummy\"\nI0731 08:10:18.392882       1 service.go:446] Removing service port \"ephemeral-4656-5706/csi-hostpath-resizer:dummy\"\nI0731 08:10:18.392890       1 service.go:446] Removing service port \"ephemeral-4656-5706/csi-hostpath-snapshotter:dummy\"\nI0731 08:10:18.393012       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:18.461404       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"68.597181ms\"\nI0731 08:10:30.093028       1 service.go:306] Service ephemeral-9829-2155/csi-hostpath-attacher updated: 0 ports\nI0731 08:10:30.093076       1 service.go:446] 
Removing service port \"ephemeral-9829-2155/csi-hostpath-attacher:dummy\"\nI0731 08:10:30.093234       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:30.191994       1 proxier.go:824] \"syncProxyRules complete\" elapsed=\"98.892026ms\"\nI0731 08:10:30.192194       1 proxier.go:854] \"Syncing iptables rules\"\nI0731 08:10:30.278266       1 proxier.go:824] \"syncProx