PR: olemarkus: Use helm's kubeclient to apply manifests
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-07-17 20:40
Elapsed: 50m47s
Revision: 5463875e0d5d2f96d736f95853c66dcd537cc48c
Refs: 11712

No Test Failures!


Error lines from build-log.txt

... skipping 493 lines ...
Operation completed over 1 objects/115.0 B.                                      
I0717 20:45:24.265784    4232 copy.go:30] cp /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops /logs/artifacts/1f401ac2-e73f-11eb-8bcd-6e57da1f3753/kops
I0717 20:45:24.473090    4232 up.go:43] Cleaning up any leaked resources from previous cluster
I0717 20:45:24.473143    4232 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0717 20:45:24.513797   11692 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0717 20:45:24.513942   11692 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Error: Cluster.kops.k8s.io "e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io" not found

Cluster.kops.k8s.io "e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io" not found
W0717 20:45:25.069161    4232 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0717 20:45:25.069293    4232 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --yes
I0717 20:45:25.097722   11700 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0717 20:45:25.097864   11700 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Error: error reading cluster configuration: Cluster.kops.k8s.io "e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io" not found

error reading cluster configuration: Cluster.kops.k8s.io "e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io" not found
I0717 20:45:25.676364    4232 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/17 20:45:25 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0717 20:45:25.690614    4232 http.go:37] curl https://ip.jsb.workers.dev
I0717 20:45:25.800070    4232 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.21.3 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210621 --channel=alpha --networking=amazonvpc --container-runtime=containerd --node-size=t3.large --admin-access 35.202.145.22/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-3a --master-size c5.large
I0717 20:45:25.836987   11711 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0717 20:45:25.837087   11711 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0717 20:45:25.913327   11711 create_cluster.go:826] Using SSH public key: /etc/aws-ssh/aws-ssh-public
I0717 20:45:26.418118   11711 new_cluster.go:1054]  Cloud Provider ID = aws
... skipping 41 lines ...

I0717 20:45:50.548802    4232 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0717 20:45:50.566210   11732 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0717 20:45:50.566302   11732 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
Validating cluster e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io

W0717 20:45:51.836723   11732 validate_cluster.go:184] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:46:01.869887   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:46:11.899418   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:46:21.940042   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:46:31.982176   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:46:42.024956   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:46:52.068103   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:47:02.100266   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:47:12.142046   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:47:22.169459   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:47:32.199822   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:47:42.231435   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:47:52.288845   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:48:02.317913   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:48:12.354969   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:48:22.384460   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:48:32.415751   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:48:42.444214   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:48:52.471691   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:49:02.505734   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0717 20:49:12.540547   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

... skipping 7 lines ...
Machine	i-074f7d61809cdb109				machine "i-074f7d61809cdb109" has not yet joined cluster
Machine	i-0a6719102f46d9f15				machine "i-0a6719102f46d9f15" has not yet joined cluster
Machine	i-0c87dd0e7e7e66410				machine "i-0c87dd0e7e7e66410" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-2bmm5	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2bmm5" is pending
Pod	kube-system/coredns-f45c4bf76-xmf8k		system-cluster-critical pod "coredns-f45c4bf76-xmf8k" is pending

Validation Failed
W0717 20:49:25.325556   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

... skipping 11 lines ...
Node	ip-172-20-55-234.eu-west-3.compute.internal	node "ip-172-20-55-234.eu-west-3.compute.internal" of role "node" is not ready
Pod	kube-system/aws-node-nm6qb			system-node-critical pod "aws-node-nm6qb" is pending
Pod	kube-system/aws-node-z7hdj			system-node-critical pod "aws-node-z7hdj" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2bmm5	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2bmm5" is pending
Pod	kube-system/coredns-f45c4bf76-xmf8k		system-cluster-critical pod "coredns-f45c4bf76-xmf8k" is pending

Validation Failed
W0717 20:49:37.325449   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

... skipping 13 lines ...
Pod	kube-system/aws-node-99rlh			system-node-critical pod "aws-node-99rlh" is pending
Pod	kube-system/aws-node-nm6qb			system-node-critical pod "aws-node-nm6qb" is pending
Pod	kube-system/aws-node-z7hdj			system-node-critical pod "aws-node-z7hdj" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2bmm5	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2bmm5" is pending
Pod	kube-system/coredns-f45c4bf76-xmf8k		system-cluster-critical pod "coredns-f45c4bf76-xmf8k" is pending

Validation Failed
W0717 20:49:49.221957   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

... skipping 12 lines ...
Node	ip-172-20-56-168.eu-west-3.compute.internal	node "ip-172-20-56-168.eu-west-3.compute.internal" of role "node" is not ready
Pod	kube-system/aws-node-99rlh			system-node-critical pod "aws-node-99rlh" is pending
Pod	kube-system/aws-node-nm6qb			system-node-critical pod "aws-node-nm6qb" is not ready (aws-node)
Pod	kube-system/coredns-autoscaler-6f594f4c58-2bmm5	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2bmm5" is pending
Pod	kube-system/coredns-f45c4bf76-xmf8k		system-cluster-critical pod "coredns-f45c4bf76-xmf8k" is pending

Validation Failed
W0717 20:50:01.266813   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

... skipping 12 lines ...
Node	ip-172-20-56-168.eu-west-3.compute.internal	node "ip-172-20-56-168.eu-west-3.compute.internal" of role "node" is not ready
Pod	kube-system/aws-node-99rlh			system-node-critical pod "aws-node-99rlh" is not ready (aws-node)
Pod	kube-system/aws-node-zv9kg			system-node-critical pod "aws-node-zv9kg" is pending
Pod	kube-system/coredns-autoscaler-6f594f4c58-2bmm5	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-2bmm5" is pending
Pod	kube-system/coredns-f45c4bf76-xmf8k		system-cluster-critical pod "coredns-f45c4bf76-xmf8k" is pending

Validation Failed
W0717 20:50:13.310748   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-36-75.eu-west-3.compute.internal	node "ip-172-20-36-75.eu-west-3.compute.internal" of role "node" is not ready
Pod	kube-system/aws-node-zv9kg			system-node-critical pod "aws-node-zv9kg" is pending

Validation Failed
W0717 20:50:25.325434   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

... skipping 51 lines ...
ip-172-20-56-168.eu-west-3.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-36-75.eu-west-3.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-36-75.eu-west-3.compute.internal" is pending

Validation Failed
W0717 20:51:12.927801   11732 validate_cluster.go:232] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-3a	Master	c5.large	1	1	eu-west-3a
nodes-eu-west-3a	Node	t3.large	4	4	eu-west-3a

... skipping 769 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 289 lines ...
STEP: Destroying namespace "services-5142" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
STEP: Destroying namespace "services-4171" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":1,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:53:49.831: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Jul 17 20:53:47.475: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2136" to be "Succeeded or Failed"
Jul 17 20:53:47.579: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 104.655908ms
Jul 17 20:53:49.684: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209198838s
Jul 17 20:53:51.790: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315432039s
STEP: Saw pod success
Jul 17 20:53:51.790: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul 17 20:53:51.894: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jul 17 20:53:52.122: INFO: Waiting for pod pod-host-path-test to disappear
Jul 17 20:53:52.228: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:6.229 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0717 20:53:47.303010   12350 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 17 20:53:47.303: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 17 20:53:52.257: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:53:52.808: INFO: Driver emptydir doesn't support ext3 -- skipping
... skipping 58 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 17 20:53:48.369: INFO: Waiting up to 5m0s for pod "pod-114c79a5-754f-4889-86ab-1f1bfdc3d4d5" in namespace "emptydir-487" to be "Succeeded or Failed"
Jul 17 20:53:48.474: INFO: Pod "pod-114c79a5-754f-4889-86ab-1f1bfdc3d4d5": Phase="Pending", Reason="", readiness=false. Elapsed: 104.616955ms
Jul 17 20:53:50.579: INFO: Pod "pod-114c79a5-754f-4889-86ab-1f1bfdc3d4d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209989564s
Jul 17 20:53:52.684: INFO: Pod "pod-114c79a5-754f-4889-86ab-1f1bfdc3d4d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315215562s
STEP: Saw pod success
Jul 17 20:53:52.684: INFO: Pod "pod-114c79a5-754f-4889-86ab-1f1bfdc3d4d5" satisfied condition "Succeeded or Failed"
Jul 17 20:53:52.797: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-114c79a5-754f-4889-86ab-1f1bfdc3d4d5 container test-container: <nil>
STEP: delete the pod
Jul 17 20:53:53.025: INFO: Waiting for pod pod-114c79a5-754f-4889-86ab-1f1bfdc3d4d5 to disappear
Jul 17 20:53:53.130: INFO: Pod pod-114c79a5-754f-4889-86ab-1f1bfdc3d4d5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    files with FSGroup ownership should support (root,0644,tmpfs)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "apply-7941" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":2,"skipped":13,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:53:54.204: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 50 lines ...
• [SLOW TEST:7.351 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:53:54.556: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 71 lines ...
• [SLOW TEST:9.186 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 17 20:53:47.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7" in namespace "downward-api-8257" to be "Succeeded or Failed"
Jul 17 20:53:47.130: INFO: Pod "downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7": Phase="Pending", Reason="", readiness=false. Elapsed: 106.803525ms
Jul 17 20:53:49.234: INFO: Pod "downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211043136s
Jul 17 20:53:51.338: INFO: Pod "downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314386591s
Jul 17 20:53:53.447: INFO: Pod "downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7": Phase="Running", Reason="", readiness=true. Elapsed: 6.423432367s
Jul 17 20:53:55.549: INFO: Pod "downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.525778388s
STEP: Saw pod success
Jul 17 20:53:55.549: INFO: Pod "downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7" satisfied condition "Succeeded or Failed"
Jul 17 20:53:55.651: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7 container client-container: <nil>
STEP: delete the pod
Jul 17 20:53:56.339: INFO: Waiting for pod downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7 to disappear
Jul 17 20:53:56.440: INFO: Pod downwardapi-volume-9551188f-e0ea-4168-ae12-0061060367c7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.462 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:53:56.760: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 17 20:53:47.043: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d" in namespace "projected-6620" to be "Succeeded or Failed"
Jul 17 20:53:47.145: INFO: Pod "downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.026388ms
Jul 17 20:53:49.250: INFO: Pod "downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206525158s
Jul 17 20:53:51.356: INFO: Pod "downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31287033s
Jul 17 20:53:53.462: INFO: Pod "downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418676346s
Jul 17 20:53:55.565: INFO: Pod "downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.521391015s
STEP: Saw pod success
Jul 17 20:53:55.565: INFO: Pod "downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d" satisfied condition "Succeeded or Failed"
Jul 17 20:53:55.666: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d container client-container: <nil>
STEP: delete the pod
Jul 17 20:53:56.371: INFO: Waiting for pod downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d to disappear
Jul 17 20:53:56.474: INFO: Pod downwardapi-volume-b8286043-2167-420a-8057-b6b76980b19d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.476 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:53:56.785: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 74 lines ...
• [SLOW TEST:12.309 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:53:58.329: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 25 lines ...
W0717 20:53:48.751792   12441 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul 17 20:53:48.751: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Jul 17 20:53:49.059: INFO: Waiting up to 5m0s for pod "var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb" in namespace "var-expansion-1022" to be "Succeeded or Failed"
Jul 17 20:53:49.162: INFO: Pod "var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.57524ms
Jul 17 20:53:51.264: INFO: Pod "var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204908312s
Jul 17 20:53:53.373: INFO: Pod "var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31399373s
Jul 17 20:53:55.485: INFO: Pod "var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425644729s
Jul 17 20:53:57.588: INFO: Pod "var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529157283s
Jul 17 20:53:59.692: INFO: Pod "var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.632644312s
STEP: Saw pod success
Jul 17 20:53:59.692: INFO: Pod "var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb" satisfied condition "Succeeded or Failed"
Jul 17 20:53:59.794: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb container dapi-container: <nil>
STEP: delete the pod
Jul 17 20:54:00.010: INFO: Waiting for pod var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb to disappear
Jul 17 20:54:00.113: INFO: Pod var-expansion-3c1d0a31-9341-4b2d-aa16-56e06d2860fb no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:13.980 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:00.431: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 91 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1345
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":1,"skipped":20,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:53:54.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Jul 17 20:53:55.229: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-6441" to be "Succeeded or Failed"
Jul 17 20:53:55.332: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 102.915789ms
Jul 17 20:53:57.440: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210729179s
Jul 17 20:53:59.543: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313817845s
Jul 17 20:54:01.646: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417244392s
Jul 17 20:54:03.750: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.521189215s
Jul 17 20:54:03.750: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:03.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6441" for this suite.


... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":9,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:04.400: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 99 lines ...
• [SLOW TEST:18.605 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:05.005: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 190 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:06.053: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:07.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-3843" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:07.614: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 62 lines ...
Jul 17 20:54:00.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 17 20:54:01.079: INFO: Waiting up to 5m0s for pod "pod-2981d570-4590-4cae-90c3-eecbcdacd477" in namespace "emptydir-7062" to be "Succeeded or Failed"
Jul 17 20:54:01.181: INFO: Pod "pod-2981d570-4590-4cae-90c3-eecbcdacd477": Phase="Pending", Reason="", readiness=false. Elapsed: 102.021953ms
Jul 17 20:54:03.283: INFO: Pod "pod-2981d570-4590-4cae-90c3-eecbcdacd477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204484383s
Jul 17 20:54:05.386: INFO: Pod "pod-2981d570-4590-4cae-90c3-eecbcdacd477": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307681901s
Jul 17 20:54:07.490: INFO: Pod "pod-2981d570-4590-4cae-90c3-eecbcdacd477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.411402509s
STEP: Saw pod success
Jul 17 20:54:07.490: INFO: Pod "pod-2981d570-4590-4cae-90c3-eecbcdacd477" satisfied condition "Succeeded or Failed"
Jul 17 20:54:07.592: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-2981d570-4590-4cae-90c3-eecbcdacd477 container test-container: <nil>
STEP: delete the pod
Jul 17 20:54:07.805: INFO: Waiting for pod pod-2981d570-4590-4cae-90c3-eecbcdacd477 to disappear
Jul 17 20:54:07.907: INFO: Pod pod-2981d570-4590-4cae-90c3-eecbcdacd477 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.660 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:53:53.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:10.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3266" for this suite.


• [SLOW TEST:16.967 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:53:46.434: INFO: >>> kubeConfig: /root/.kube/config
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:11.102: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 130 lines ...
Jul 17 20:53:58.541: INFO: PersistentVolumeClaim pvc-gvhns found but phase is Pending instead of Bound.
Jul 17 20:54:00.646: INFO: PersistentVolumeClaim pvc-gvhns found and phase=Bound (2.215429465s)
Jul 17 20:54:00.646: INFO: Waiting up to 3m0s for PersistentVolume local-b5fg7 to have phase Bound
Jul 17 20:54:00.751: INFO: PersistentVolume local-b5fg7 found and phase=Bound (104.771368ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jkhs
STEP: Creating a pod to test subpath
Jul 17 20:54:01.071: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jkhs" in namespace "provisioning-4612" to be "Succeeded or Failed"
Jul 17 20:54:01.176: INFO: Pod "pod-subpath-test-preprovisionedpv-jkhs": Phase="Pending", Reason="", readiness=false. Elapsed: 105.208516ms
Jul 17 20:54:03.282: INFO: Pod "pod-subpath-test-preprovisionedpv-jkhs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211016318s
Jul 17 20:54:05.387: INFO: Pod "pod-subpath-test-preprovisionedpv-jkhs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31667084s
Jul 17 20:54:07.493: INFO: Pod "pod-subpath-test-preprovisionedpv-jkhs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422118759s
Jul 17 20:54:09.598: INFO: Pod "pod-subpath-test-preprovisionedpv-jkhs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.527470686s
STEP: Saw pod success
Jul 17 20:54:09.598: INFO: Pod "pod-subpath-test-preprovisionedpv-jkhs" satisfied condition "Succeeded or Failed"
Jul 17 20:54:09.703: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-jkhs container test-container-volume-preprovisionedpv-jkhs: <nil>
STEP: delete the pod
Jul 17 20:54:09.924: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jkhs to disappear
Jul 17 20:54:10.030: INFO: Pod pod-subpath-test-preprovisionedpv-jkhs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jkhs
Jul 17 20:54:10.030: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jkhs" in namespace "provisioning-4612"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:11.497: INFO: Only supported for providers [gce gke] (not aws)
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:11.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-7572" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":2,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 17 20:54:08.270: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec8bfc0a-aafb-4e74-aeb4-c7f7f47f7927" in namespace "projected-3970" to be "Succeeded or Failed"
Jul 17 20:54:08.373: INFO: Pod "downwardapi-volume-ec8bfc0a-aafb-4e74-aeb4-c7f7f47f7927": Phase="Pending", Reason="", readiness=false. Elapsed: 102.900449ms
Jul 17 20:54:10.476: INFO: Pod "downwardapi-volume-ec8bfc0a-aafb-4e74-aeb4-c7f7f47f7927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205717064s
Jul 17 20:54:12.580: INFO: Pod "downwardapi-volume-ec8bfc0a-aafb-4e74-aeb4-c7f7f47f7927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309630265s
STEP: Saw pod success
Jul 17 20:54:12.580: INFO: Pod "downwardapi-volume-ec8bfc0a-aafb-4e74-aeb4-c7f7f47f7927" satisfied condition "Succeeded or Failed"
Jul 17 20:54:12.683: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod downwardapi-volume-ec8bfc0a-aafb-4e74-aeb4-c7f7f47f7927 container client-container: <nil>
STEP: delete the pod
Jul 17 20:54:12.898: INFO: Waiting for pod downwardapi-volume-ec8bfc0a-aafb-4e74-aeb4-c7f7f47f7927 to disappear
Jul 17 20:54:13.000: INFO: Pod downwardapi-volume-ec8bfc0a-aafb-4e74-aeb4-c7f7f47f7927 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":3,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:14.346: INFO: Only supported for providers [vsphere] (not aws)
... skipping 14 lines ...
      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:53:56.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
• [SLOW TEST:18.413 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 93 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:15.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-7915" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:15.267: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
• [SLOW TEST:23.395 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:583
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:16.229: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:17.153: INFO: Only supported for providers [gce gke] (not aws)
... skipping 105 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
Jul 17 20:54:01.603: INFO: PersistentVolumeClaim pvc-6mjd9 found and phase=Bound (103.062505ms)
Jul 17 20:54:01.603: INFO: Waiting up to 3m0s for PersistentVolume nfs-dfwnw to have phase Bound
Jul 17 20:54:01.706: INFO: PersistentVolume nfs-dfwnw found and phase=Bound (102.957966ms)
STEP: Checking pod has write access to PersistentVolume
Jul 17 20:54:01.916: INFO: Creating nfs test pod
Jul 17 20:54:02.020: INFO: Pod should terminate with exitcode 0 (success)
Jul 17 20:54:02.020: INFO: Waiting up to 5m0s for pod "pvc-tester-h75cs" in namespace "pv-7530" to be "Succeeded or Failed"
Jul 17 20:54:02.123: INFO: Pod "pvc-tester-h75cs": Phase="Pending", Reason="", readiness=false. Elapsed: 102.903502ms
Jul 17 20:54:04.227: INFO: Pod "pvc-tester-h75cs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206933835s
Jul 17 20:54:06.348: INFO: Pod "pvc-tester-h75cs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328055351s
Jul 17 20:54:08.452: INFO: Pod "pvc-tester-h75cs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.431758995s
STEP: Saw pod success
Jul 17 20:54:08.452: INFO: Pod "pvc-tester-h75cs" satisfied condition "Succeeded or Failed"
Jul 17 20:54:08.452: INFO: Pod pvc-tester-h75cs succeeded 
Jul 17 20:54:08.452: INFO: Deleting pod "pvc-tester-h75cs" in namespace "pv-7530"
Jul 17 20:54:08.559: INFO: Wait up to 5m0s for pod "pvc-tester-h75cs" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul 17 20:54:08.661: INFO: Deleting PVC pvc-6mjd9 to trigger reclamation of PV 
Jul 17 20:54:08.661: INFO: Deleting PersistentVolumeClaim "pvc-6mjd9"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access 
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:17.711: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-6phw
STEP: Creating a pod to test atomic-volume-subpath
Jul 17 20:53:47.267: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6phw" in namespace "subpath-6765" to be "Succeeded or Failed"
Jul 17 20:53:47.376: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Pending", Reason="", readiness=false. Elapsed: 108.70131ms
Jul 17 20:53:49.482: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.215498783s
Jul 17 20:53:51.587: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320541842s
Jul 17 20:53:53.693: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Running", Reason="", readiness=true. Elapsed: 6.426103824s
Jul 17 20:53:55.799: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Running", Reason="", readiness=true. Elapsed: 8.531845762s
Jul 17 20:53:57.912: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Running", Reason="", readiness=true. Elapsed: 10.645297656s
... skipping 4 lines ...
Jul 17 20:54:08.440: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Running", Reason="", readiness=true. Elapsed: 21.17326156s
Jul 17 20:54:10.545: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Running", Reason="", readiness=true. Elapsed: 23.278047659s
Jul 17 20:54:12.651: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Running", Reason="", readiness=true. Elapsed: 25.383604556s
Jul 17 20:54:14.759: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Running", Reason="", readiness=true. Elapsed: 27.492077992s
Jul 17 20:54:16.867: INFO: Pod "pod-subpath-test-configmap-6phw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.600181393s
STEP: Saw pod success
Jul 17 20:54:16.867: INFO: Pod "pod-subpath-test-configmap-6phw" satisfied condition "Succeeded or Failed"
Jul 17 20:54:16.975: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-configmap-6phw container test-container-subpath-configmap-6phw: <nil>
STEP: delete the pod
Jul 17 20:54:17.202: INFO: Waiting for pod pod-subpath-test-configmap-6phw to disappear
Jul 17 20:54:17.314: INFO: Pod pod-subpath-test-configmap-6phw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6phw
Jul 17 20:54:17.314: INFO: Deleting pod "pod-subpath-test-configmap-6phw" in namespace "subpath-6765"
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:17.754: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 235 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:19.936: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:19.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-5530" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":4,"skipped":48,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:14.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-d66b4548-837f-4c72-bf59-5486ce687b00
STEP: Creating a pod to test consume configMaps
Jul 17 20:54:15.140: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059" in namespace "projected-7313" to be "Succeeded or Failed"
Jul 17 20:54:15.246: INFO: Pod "pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059": Phase="Pending", Reason="", readiness=false. Elapsed: 105.949601ms
Jul 17 20:54:17.348: INFO: Pod "pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208241292s
Jul 17 20:54:19.454: INFO: Pod "pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314397186s
Jul 17 20:54:21.559: INFO: Pod "pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.419666403s
STEP: Saw pod success
Jul 17 20:54:21.560: INFO: Pod "pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059" satisfied condition "Succeeded or Failed"
Jul 17 20:54:21.661: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 20:54:21.874: INFO: Waiting for pod pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059 to disappear
Jul 17 20:54:21.976: INFO: Pod pod-projected-configmaps-7853dbfd-8b68-42b9-905a-2c5a00e05059 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.820 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:22.204: INFO: Only supported for providers [azure] (not aws)
... skipping 67 lines ...
Jul 17 20:54:14.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Jul 17 20:54:15.447: INFO: Waiting up to 5m0s for pod "client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b" in namespace "containers-6774" to be "Succeeded or Failed"
Jul 17 20:54:15.556: INFO: Pod "client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b": Phase="Pending", Reason="", readiness=false. Elapsed: 108.861875ms
Jul 17 20:54:17.659: INFO: Pod "client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211706745s
Jul 17 20:54:19.763: INFO: Pod "client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315537182s
Jul 17 20:54:21.866: INFO: Pod "client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.418607964s
STEP: Saw pod success
Jul 17 20:54:21.866: INFO: Pod "client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b" satisfied condition "Succeeded or Failed"
Jul 17 20:54:21.968: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b container agnhost-container: <nil>
STEP: delete the pod
Jul 17 20:54:22.177: INFO: Waiting for pod client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b to disappear
Jul 17 20:54:22.283: INFO: Pod client-containers-adea1f3b-600f-4293-ae8d-43eaa249d79b no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.705 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:22.514: INFO: Only supported for providers [openstack] (not aws)
... skipping 96 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:22.541: INFO: Only supported for providers [azure] (not aws)
... skipping 85 lines ...
• [SLOW TEST:7.353 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Jul 17 20:54:16.869: INFO: Waiting up to 5m0s for pod "metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259" in namespace "downward-api-2419" to be "Succeeded or Failed"
Jul 17 20:54:16.974: INFO: Pod "metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259": Phase="Pending", Reason="", readiness=false. Elapsed: 104.553641ms
Jul 17 20:54:19.076: INFO: Pod "metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206995266s
Jul 17 20:54:21.179: INFO: Pod "metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310196807s
Jul 17 20:54:23.282: INFO: Pod "metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412730907s
Jul 17 20:54:25.385: INFO: Pod "metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.515834915s
STEP: Saw pod success
Jul 17 20:54:25.385: INFO: Pod "metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259" satisfied condition "Succeeded or Failed"
Jul 17 20:54:25.487: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259 container client-container: <nil>
STEP: delete the pod
Jul 17 20:54:25.698: INFO: Waiting for pod metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259 to disappear
Jul 17 20:54:25.800: INFO: Pod metadata-volume-df9b9690-b1dc-4761-8a0b-adcbb8e43259 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.768 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":14,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:26.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6572" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:9.156 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should ensure a single API token exists
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:52
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should ensure a single API token exists","total":-1,"completed":3,"skipped":23,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:26.985: INFO: Only supported for providers [openstack] (not aws)
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:19.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:9.397 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:28.918: INFO: Only supported for providers [openstack] (not aws)
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:13.913 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:36.201: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 75 lines ...
• [SLOW TEST:15.433 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:38.045: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
STEP: Creating a pod to test downward API volume plugin
Jul 17 20:54:29.644: INFO: Waiting up to 5m0s for pod "metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49" in namespace "downward-api-2976" to be "Succeeded or Failed"
Jul 17 20:54:29.748: INFO: Pod "metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49": Phase="Pending", Reason="", readiness=false. Elapsed: 104.119487ms
Jul 17 20:54:31.853: INFO: Pod "metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209402328s
Jul 17 20:54:33.962: INFO: Pod "metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317451376s
Jul 17 20:54:36.076: INFO: Pod "metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.432220881s
Jul 17 20:54:38.181: INFO: Pod "metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.537035228s
STEP: Saw pod success
Jul 17 20:54:38.181: INFO: Pod "metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49" satisfied condition "Succeeded or Failed"
Jul 17 20:54:38.285: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49 container client-container: <nil>
STEP: delete the pod
Jul 17 20:54:38.501: INFO: Waiting for pod metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49 to disappear
Jul 17 20:54:38.605: INFO: Pod metadata-volume-4ff7336c-47af-41ee-bb58-5e0ad30ada49 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.818 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:106
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:38.831: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 174 lines ...
      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:13.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:47
    should be mountable
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48
------------------------------
{"msg":"PASSED [sig-storage] Volumes ConfigMap should be mountable","total":-1,"completed":5,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:39.987: INFO: Only supported for providers [gce gke] (not aws)
... skipping 60 lines ...
Jul 17 20:54:28.067: INFO: PersistentVolumeClaim pvc-gdcnf found but phase is Pending instead of Bound.
Jul 17 20:54:30.174: INFO: PersistentVolumeClaim pvc-gdcnf found and phase=Bound (6.419675481s)
Jul 17 20:54:30.174: INFO: Waiting up to 3m0s for PersistentVolume local-jjb7n to have phase Bound
Jul 17 20:54:30.279: INFO: PersistentVolume local-jjb7n found and phase=Bound (104.784406ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-cvcm
STEP: Creating a pod to test subpath
Jul 17 20:54:30.595: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-cvcm" in namespace "provisioning-2047" to be "Succeeded or Failed"
Jul 17 20:54:30.699: INFO: Pod "pod-subpath-test-preprovisionedpv-cvcm": Phase="Pending", Reason="", readiness=false. Elapsed: 104.428483ms
Jul 17 20:54:32.805: INFO: Pod "pod-subpath-test-preprovisionedpv-cvcm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210305938s
Jul 17 20:54:34.923: INFO: Pod "pod-subpath-test-preprovisionedpv-cvcm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32801316s
Jul 17 20:54:37.028: INFO: Pod "pod-subpath-test-preprovisionedpv-cvcm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433675805s
Jul 17 20:54:39.133: INFO: Pod "pod-subpath-test-preprovisionedpv-cvcm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538387575s
Jul 17 20:54:41.239: INFO: Pod "pod-subpath-test-preprovisionedpv-cvcm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.644365438s
STEP: Saw pod success
Jul 17 20:54:41.239: INFO: Pod "pod-subpath-test-preprovisionedpv-cvcm" satisfied condition "Succeeded or Failed"
Jul 17 20:54:41.344: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-cvcm container test-container-subpath-preprovisionedpv-cvcm: <nil>
STEP: delete the pod
Jul 17 20:54:41.572: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-cvcm to disappear
Jul 17 20:54:41.680: INFO: Pod pod-subpath-test-preprovisionedpv-cvcm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-cvcm
Jul 17 20:54:41.680: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-cvcm" in namespace "provisioning-2047"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:43.158: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 158 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jul 17 20:54:34.606: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7123 exec httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jul 17 20:54:36.447: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Jul 17 20:54:36.447: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7123 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jul 17 20:54:37.819: INFO: rc: 255
Jul 17 20:54:37.819: INFO: got err error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7123 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0717 20:54:37.665807     200 merged_client_builder.go:163] Using in-cluster namespace
I0717 20:54:37.666039     200 merged_client_builder.go:121] Using in-cluster configuration
I0717 20:54:37.670721     200 merged_client_builder.go:121] Using in-cluster configuration
I0717 20:54:37.677764     200 merged_client_builder.go:121] Using in-cluster configuration
I0717 20:54:37.678250     200 round_trippers.go:432] GET https://172.20.0.1:443/api/v1/namespaces/kubectl-7123/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0717 20:54:37.685642     200 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc000130001, 0xc0008bd500, 0x68, 0x1af)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3055420, 0xc000000003, 0x0, 0x0, 0xc0005f8d20, 0x25f2cf0, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3055420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc0009c66c0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0008dfc80, 0x3a, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207dd80, 0xc0009aa738, 0x1f07e88)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8a3
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0000902c0, 0xc0004ebb00, 0x1, 0x3)
... skipping 66 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:705 +0x6c5

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Jul 17 20:54:37.819: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7123 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jul 17 20:54:39.086: INFO: rc: 255
Jul 17 20:54:39.086: INFO: got err error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7123 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0717 20:54:38.929196     212 merged_client_builder.go:163] Using in-cluster namespace
I0717 20:54:38.940056     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 10 milliseconds
I0717 20:54:38.940129     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 172.20.0.10:53: no such host
I0717 20:54:38.955654     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 15 milliseconds
I0717 20:54:38.955727     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 172.20.0.10:53: no such host
I0717 20:54:38.955762     212 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 172.20.0.10:53: no such host
I0717 20:54:38.958726     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 2 milliseconds
I0717 20:54:38.958803     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 172.20.0.10:53: no such host
I0717 20:54:38.960911     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0717 20:54:38.960971     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 172.20.0.10:53: no such host
I0717 20:54:38.962704     212 round_trippers.go:454] GET http://invalid/api?timeout=32s  in 1 milliseconds
I0717 20:54:38.962747     212 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 172.20.0.10:53: no such host
I0717 20:54:38.962795     212 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 172.20.0.10:53: no such host
F0717 20:54:38.962848     212 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 172.20.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00012e001, 0xc0004c1a40, 0x88, 0x1b8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1021 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x3055420, 0xc000000003, 0x0, 0x0, 0xc000739e30, 0x25f2cf0, 0xa, 0x73, 0x40e300)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:970 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x3055420, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc000089ef0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:733 +0x16f
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1495
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000049aa0, 0x59, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x288
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x207d0e0, 0xc0001ac6c0, 0x1f07e88)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x935
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0004dab00, 0xc0003f3890, 0x1, 0x3)
... skipping 24 lines ...
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Jul 17 20:54:39.087: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7123 exec httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jul 17 20:54:40.351: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jul 17 20:54:40.351: INFO: stdout: "I0717 20:54:40.267101     224 merged_client_builder.go:121] Using in-cluster configuration\nI0717 20:54:40.271997     224 merged_client_builder.go:121] Using in-cluster configuration\nI0717 20:54:40.284778     224 merged_client_builder.go:121] Using in-cluster configuration\nI0717 20:54:40.292032     224 round_trippers.go:454] GET https://172.20.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 6 milliseconds\nNo resources found in invalid namespace.\n"
Jul 17 20:54:40.351: INFO: stdout: I0717 20:54:40.267101     224 merged_client_builder.go:121] Using in-cluster configuration
... skipping 75 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should handle in-cluster config
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:636
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:43.537: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 54 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":3,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:10.458: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Jul 17 20:54:28.542: INFO: PersistentVolumeClaim pvc-9s2vq found but phase is Pending instead of Bound.
Jul 17 20:54:30.650: INFO: PersistentVolumeClaim pvc-9s2vq found and phase=Bound (6.440334138s)
Jul 17 20:54:30.650: INFO: Waiting up to 3m0s for PersistentVolume local-t22bd to have phase Bound
Jul 17 20:54:30.754: INFO: PersistentVolume local-t22bd found and phase=Bound (104.345574ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mxb7
STEP: Creating a pod to test subpath
Jul 17 20:54:31.072: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mxb7" in namespace "provisioning-8842" to be "Succeeded or Failed"
Jul 17 20:54:31.183: INFO: Pod "pod-subpath-test-preprovisionedpv-mxb7": Phase="Pending", Reason="", readiness=false. Elapsed: 110.646441ms
Jul 17 20:54:33.289: INFO: Pod "pod-subpath-test-preprovisionedpv-mxb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216453022s
Jul 17 20:54:35.393: INFO: Pod "pod-subpath-test-preprovisionedpv-mxb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321312732s
Jul 17 20:54:37.500: INFO: Pod "pod-subpath-test-preprovisionedpv-mxb7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427431938s
Jul 17 20:54:39.605: INFO: Pod "pod-subpath-test-preprovisionedpv-mxb7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533212578s
Jul 17 20:54:41.711: INFO: Pod "pod-subpath-test-preprovisionedpv-mxb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.639083883s
STEP: Saw pod success
Jul 17 20:54:41.711: INFO: Pod "pod-subpath-test-preprovisionedpv-mxb7" satisfied condition "Succeeded or Failed"
Jul 17 20:54:41.819: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-mxb7 container test-container-volume-preprovisionedpv-mxb7: <nil>
STEP: delete the pod
Jul 17 20:54:42.038: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mxb7 to disappear
Jul 17 20:54:42.143: INFO: Pod pod-subpath-test-preprovisionedpv-mxb7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mxb7
Jul 17 20:54:42.143: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mxb7" in namespace "provisioning-8842"
... skipping 48 lines ...
Jul 17 20:54:40.197: INFO: Got stdout from 15.236.247.177:22: Hello from ubuntu@ip-172-20-56-168
STEP: SSH'ing to 1 nodes and running echo "foo" | grep "bar"
STEP: SSH'ing to 1 nodes and running echo "stdout" && echo "stderr" >&2 && exit 7
Jul 17 20:54:43.426: INFO: Got stdout from 35.181.4.186:22: stdout
Jul 17 20:54:43.426: INFO: Got stderr from 35.181.4.186:22: stderr
STEP: SSH'ing to a nonexistent host
error dialing ubuntu@i.do.not.exist: 'dial tcp: address i.do.not.exist: missing port in address', retrying
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:48.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-6425" for this suite.


• [SLOW TEST:21.604 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
------------------------------
{"msg":"PASSED [sig-node] SSH should SSH to all nodes and run commands","total":-1,"completed":4,"skipped":35,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 2 lines ...
Jul 17 20:54:05.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Jul 17 20:54:05.608: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 17 20:54:05.825: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-159" in namespace "volume-159" to be "Succeeded or Failed"
Jul 17 20:54:05.930: INFO: Pod "hostpath-symlink-prep-volume-159": Phase="Pending", Reason="", readiness=false. Elapsed: 104.751467ms
Jul 17 20:54:08.043: INFO: Pod "hostpath-symlink-prep-volume-159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218044402s
Jul 17 20:54:10.148: INFO: Pod "hostpath-symlink-prep-volume-159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.32327936s
STEP: Saw pod success
Jul 17 20:54:10.148: INFO: Pod "hostpath-symlink-prep-volume-159" satisfied condition "Succeeded or Failed"
Jul 17 20:54:10.148: INFO: Deleting pod "hostpath-symlink-prep-volume-159" in namespace "volume-159"
Jul 17 20:54:10.257: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-159" to be fully deleted
Jul 17 20:54:10.361: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Jul 17 20:54:14.688: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-159 exec hostpathsymlink-injector --namespace=volume-159 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-159' > /opt/0/index.html'
... skipping 36 lines ...
Jul 17 20:54:40.025: INFO: Pod hostpathsymlink-client still exists
Jul 17 20:54:41.918: INFO: Waiting for pod hostpathsymlink-client to disappear
Jul 17 20:54:42.023: INFO: Pod hostpathsymlink-client still exists
Jul 17 20:54:43.918: INFO: Waiting for pod hostpathsymlink-client to disappear
Jul 17 20:54:44.023: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Jul 17 20:54:44.134: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-159" in namespace "volume-159" to be "Succeeded or Failed"
Jul 17 20:54:44.239: INFO: Pod "hostpath-symlink-prep-volume-159": Phase="Pending", Reason="", readiness=false. Elapsed: 104.596971ms
Jul 17 20:54:46.344: INFO: Pod "hostpath-symlink-prep-volume-159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209676786s
Jul 17 20:54:48.450: INFO: Pod "hostpath-symlink-prep-volume-159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315546458s
STEP: Saw pod success
Jul 17 20:54:48.450: INFO: Pod "hostpath-symlink-prep-volume-159" satisfied condition "Succeeded or Failed"
Jul 17 20:54:48.450: INFO: Deleting pod "hostpath-symlink-prep-volume-159" in namespace "volume-159"
Jul 17 20:54:48.559: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-159" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:48.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-159" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":2,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:48.913: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 27 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:54:49.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":3,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:49.295: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 49 lines ...
• [SLOW TEST:10.559 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment reaping should cascade to its replica sets and pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:92
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":4,"skipped":31,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:50.251: INFO: Only supported for providers [gce gke] (not aws)
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:09.605: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
Jul 17 20:54:28.101: INFO: PersistentVolumeClaim pvc-6g9vf found but phase is Pending instead of Bound.
Jul 17 20:54:30.206: INFO: PersistentVolumeClaim pvc-6g9vf found and phase=Bound (12.728405111s)
Jul 17 20:54:30.207: INFO: Waiting up to 3m0s for PersistentVolume local-99qwk to have phase Bound
Jul 17 20:54:30.312: INFO: PersistentVolume local-99qwk found and phase=Bound (105.17113ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qfzd
STEP: Creating a pod to test subpath
Jul 17 20:54:30.627: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qfzd" in namespace "provisioning-3155" to be "Succeeded or Failed"
Jul 17 20:54:30.729: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Pending", Reason="", readiness=false. Elapsed: 101.957748ms
Jul 17 20:54:32.832: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204943515s
Jul 17 20:54:34.934: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307749018s
Jul 17 20:54:37.038: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.411635569s
STEP: Saw pod success
Jul 17 20:54:37.038: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd" satisfied condition "Succeeded or Failed"
Jul 17 20:54:37.140: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-qfzd container test-container-subpath-preprovisionedpv-qfzd: <nil>
STEP: delete the pod
Jul 17 20:54:37.350: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qfzd to disappear
Jul 17 20:54:37.452: INFO: Pod pod-subpath-test-preprovisionedpv-qfzd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qfzd
Jul 17 20:54:37.452: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qfzd" in namespace "provisioning-3155"
STEP: Creating pod pod-subpath-test-preprovisionedpv-qfzd
STEP: Creating a pod to test subpath
Jul 17 20:54:37.657: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qfzd" in namespace "provisioning-3155" to be "Succeeded or Failed"
Jul 17 20:54:37.759: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Pending", Reason="", readiness=false. Elapsed: 101.989433ms
Jul 17 20:54:39.864: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206191907s
Jul 17 20:54:41.966: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308710404s
Jul 17 20:54:44.069: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411518098s
Jul 17 20:54:46.172: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514780009s
Jul 17 20:54:48.283: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.625425571s
STEP: Saw pod success
Jul 17 20:54:48.283: INFO: Pod "pod-subpath-test-preprovisionedpv-qfzd" satisfied condition "Succeeded or Failed"
Jul 17 20:54:48.385: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-qfzd container test-container-subpath-preprovisionedpv-qfzd: <nil>
STEP: delete the pod
Jul 17 20:54:48.604: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qfzd to disappear
Jul 17 20:54:48.707: INFO: Pod pod-subpath-test-preprovisionedpv-qfzd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qfzd
Jul 17 20:54:48.707: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qfzd" in namespace "provisioning-3155"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":10,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":7,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:45.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul 17 20:54:46.416: INFO: Waiting up to 5m0s for pod "security-context-c9657874-4fcb-4715-ac66-5267dd2516c4" in namespace "security-context-2576" to be "Succeeded or Failed"
Jul 17 20:54:46.521: INFO: Pod "security-context-c9657874-4fcb-4715-ac66-5267dd2516c4": Phase="Pending", Reason="", readiness=false. Elapsed: 104.314972ms
Jul 17 20:54:48.626: INFO: Pod "security-context-c9657874-4fcb-4715-ac66-5267dd2516c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209718325s
Jul 17 20:54:50.732: INFO: Pod "security-context-c9657874-4fcb-4715-ac66-5267dd2516c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3161801s
Jul 17 20:54:52.838: INFO: Pod "security-context-c9657874-4fcb-4715-ac66-5267dd2516c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.421311374s
STEP: Saw pod success
Jul 17 20:54:52.838: INFO: Pod "security-context-c9657874-4fcb-4715-ac66-5267dd2516c4" satisfied condition "Succeeded or Failed"
Jul 17 20:54:52.942: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod security-context-c9657874-4fcb-4715-ac66-5267dd2516c4 container test-container: <nil>
STEP: delete the pod
Jul 17 20:54:53.165: INFO: Waiting for pod security-context-c9657874-4fcb-4715-ac66-5267dd2516c4 to disappear
Jul 17 20:54:53.269: INFO: Pod security-context-c9657874-4fcb-4715-ac66-5267dd2516c4 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.703 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":5,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:53.500: INFO: Only supported for providers [vsphere] (not aws)
... skipping 70 lines ...
      Driver "csi-hostpath" does not define supported mount option - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:181
------------------------------
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":6,"skipped":20,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:40.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":7,"skipped":20,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:54.558: INFO: Only supported for providers [openstack] (not aws)
... skipping 150 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:58.343: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 142 lines ...
• [SLOW TEST:10.166 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:58.827: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:54:58.827: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 187 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:26.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 117 lines ...
Jul 17 20:54:58.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Jul 17 20:54:59.476: INFO: Waiting up to 5m0s for pod "client-containers-b9b5b6cf-7270-4dc0-88ad-0d91b2567ee3" in namespace "containers-420" to be "Succeeded or Failed"
Jul 17 20:54:59.580: INFO: Pod "client-containers-b9b5b6cf-7270-4dc0-88ad-0d91b2567ee3": Phase="Pending", Reason="", readiness=false. Elapsed: 103.749596ms
Jul 17 20:55:01.685: INFO: Pod "client-containers-b9b5b6cf-7270-4dc0-88ad-0d91b2567ee3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208254085s
STEP: Saw pod success
Jul 17 20:55:01.685: INFO: Pod "client-containers-b9b5b6cf-7270-4dc0-88ad-0d91b2567ee3" satisfied condition "Succeeded or Failed"
Jul 17 20:55:01.788: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod client-containers-b9b5b6cf-7270-4dc0-88ad-0d91b2567ee3 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 20:55:02.002: INFO: Waiting for pod client-containers-b9b5b6cf-7270-4dc0-88ad-0d91b2567ee3 to disappear
Jul 17 20:55:02.106: INFO: Pod client-containers-b9b5b6cf-7270-4dc0-88ad-0d91b2567ee3 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:02.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-420" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:02.359: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 32 lines ...
Jul 17 20:54:26.555: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-85349pk6d
STEP: creating a claim
Jul 17 20:54:26.659: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-6ldg
STEP: Creating a pod to test subpath
Jul 17 20:54:26.969: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-6ldg" in namespace "provisioning-8534" to be "Succeeded or Failed"
Jul 17 20:54:27.071: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 101.920998ms
Jul 17 20:54:29.174: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204742446s
Jul 17 20:54:31.278: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308566922s
Jul 17 20:54:33.382: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412315457s
Jul 17 20:54:35.484: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515011894s
Jul 17 20:54:37.592: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.622675981s
Jul 17 20:54:39.698: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.728212172s
Jul 17 20:54:41.801: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.831371099s
Jul 17 20:54:43.904: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.934930201s
Jul 17 20:54:46.007: INFO: Pod "pod-subpath-test-dynamicpv-6ldg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.03763852s
STEP: Saw pod success
Jul 17 20:54:46.007: INFO: Pod "pod-subpath-test-dynamicpv-6ldg" satisfied condition "Succeeded or Failed"
Jul 17 20:54:46.109: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-6ldg container test-container-subpath-dynamicpv-6ldg: <nil>
STEP: delete the pod
Jul 17 20:54:46.318: INFO: Waiting for pod pod-subpath-test-dynamicpv-6ldg to disappear
Jul 17 20:54:46.428: INFO: Pod pod-subpath-test-dynamicpv-6ldg no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-6ldg
Jul 17 20:54:46.428: INFO: Deleting pod "pod-subpath-test-dynamicpv-6ldg" in namespace "provisioning-8534"
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:02.677: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 92 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":2,"skipped":20,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:04.074: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 121 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity used, insufficient capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity","total":-1,"completed":3,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:30.298 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":3,"skipped":17,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:55:01.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:453
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:454
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:55:13.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "apply-3309" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field","total":-1,"completed":5,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:14.817: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 64 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":7,"skipped":55,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:14.862: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 57 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  GlusterDynamicProvisioner
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:793
    should create and delete persistent volumes [fast]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:794
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","total":-1,"completed":8,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
Jul 17 20:54:03.003: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-kk8v9] to have phase Bound
Jul 17 20:54:03.106: INFO: PersistentVolumeClaim pvc-kk8v9 found and phase=Bound (103.103954ms)
STEP: Deleting the previously created pod
Jul 17 20:54:17.626: INFO: Deleting pod "pvc-volume-tester-jtxzm" in namespace "csi-mock-volumes-6907"
Jul 17 20:54:17.731: INFO: Wait up to 5m0s for pod "pvc-volume-tester-jtxzm" to be fully deleted
STEP: Checking CSI driver logs
Jul 17 20:54:24.045: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/d0cd8846-2459-4b6e-bbec-a16796866c79/volumes/kubernetes.io~csi/pvc-78671b62-3256-4323-b0be-26773348fc32/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-jtxzm
Jul 17 20:54:24.045: INFO: Deleting pod "pvc-volume-tester-jtxzm" in namespace "csi-mock-volumes-6907"
STEP: Deleting claim pvc-kk8v9
Jul 17 20:54:24.371: INFO: Waiting up to 2m0s for PersistentVolume pvc-78671b62-3256-4323-b0be-26773348fc32 to get deleted
Jul 17 20:54:24.474: INFO: PersistentVolume pvc-78671b62-3256-4323-b0be-26773348fc32 found and phase=Released (103.149872ms)
Jul 17 20:54:26.578: INFO: PersistentVolume pvc-78671b62-3256-4323-b0be-26773348fc32 found and phase=Released (2.207162473s)
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should not be plumbed down when CSIDriver is not deployed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":1,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:19.135: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 135 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec using resource/name
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:428
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":5,"skipped":40,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":19,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:39.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Jul 17 20:54:39.628: INFO: PodSpec: initContainers in spec.initContainers
Jul 17 20:55:27.602: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e8864fe4-c4c7-46d0-b8db-6423c9a991a4", GenerateName:"", Namespace:"init-container-9130", SelfLink:"", UID:"dd99c02c-f180-45dc-84d3-418013af2b78", ResourceVersion:"5737", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152079, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"628574306"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a64e40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a64e58)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002a64e70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a64e88)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-dp2cs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002d76d00), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-dp2cs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, 
Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-dp2cs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-dp2cs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f00028), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"ip-172-20-55-234.eu-west-3.compute.internal", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002c88b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f000a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002f000c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002f000c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002f000cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002a6f3c0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762152079, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762152079, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762152079, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63762152079, loc:(*time.Location)(0x9ddf5a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.55.234", PodIP:"172.20.51.190", PodIPs:[]v1.PodIP{v1.PodIP{IP:"172.20.51.190"}}, StartTime:(*v1.Time)(0xc002a64eb8), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c88c40)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c88cb0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://4913b5f162e4760edfb2acd600b0c5babc9c2ef9816736e72e36dbcb788b2fcb", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d76d80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d76d60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002f00144)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:27.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9130" for this suite.


• [SLOW TEST:48.695 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
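The init-container dump a few blocks above corresponds to a pod whose first init container runs /bin/false, so the second init container and the app container must never start on a RestartAlways pod. A minimal reconstruction of that spec, using only the corev1 fields visible in the dump (names, images and commands are copied from the log; the package layout and helper name are assumptions for illustration, not the test framework's code):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // restartAlwaysInitPod rebuilds the pod shown in the dump above:
    // init1 always fails, so init2 and run1 should never be started.
    func restartAlwaysInitPod(namespace string) *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			GenerateName: "pod-init-",
    			Namespace:    namespace,
    			Labels:       map[string]string{"name": "foo"},
    		},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyAlways,
    			InitContainers: []corev1.Container{
    				{Name: "init1", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command: []string{"/bin/false"}},
    				{Name: "init2", Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command: []string{"/bin/true"}},
    			},
    			Containers: []corev1.Container{
    				{Name: "run1", Image: "k8s.gcr.io/pause:3.4.1"},
    			},
    		},
    	}
    }

------------------------------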
[BeforeEach] [sig-network] Firewall rule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:204

  Only supported for providers [gce] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/firewall.go:62
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:54:05.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
Jul 17 20:54:19.124: INFO: stdout: "externalip-test-sdtlr"
Jul 17 20:54:19.124: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:20.242: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.20.4.5 80\nConnection to 172.20.4.5 80 port [tcp/http] succeeded!\n"
Jul 17 20:54:20.242: INFO: stdout: ""
Jul 17 20:54:21.243: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:24.393: INFO: rc: 1
Jul 17 20:54:24.393: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.4.5 80
nc: connect to 172.20.4.5 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:54:25.243: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:26.580: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.20.4.5 80\nConnection to 172.20.4.5 80 port [tcp/http] succeeded!\n"
Jul 17 20:54:26.580: INFO: stdout: ""
Jul 17 20:54:27.243: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:30.414: INFO: rc: 1
Jul 17 20:54:30.415: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80:
Command stdout:

stderr:
+ nc -v -t -w 2 172.20.4.5 80
+ echo hostName
nc: connect to 172.20.4.5 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:54:31.243: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:34.389: INFO: rc: 1
Jul 17 20:54:34.389: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.4.5 80
nc: connect to 172.20.4.5 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:54:35.242: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:36.428: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.20.4.5 80\nConnection to 172.20.4.5 80 port [tcp/http] succeeded!\n"
Jul 17 20:54:36.428: INFO: stdout: ""
Jul 17 20:54:37.243: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:40.345: INFO: rc: 1
Jul 17 20:54:40.345: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 172.20.4.5 80
nc: connect to 172.20.4.5 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:54:41.243: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:42.374: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.20.4.5 80\nConnection to 172.20.4.5 80 port [tcp/http] succeeded!\n"
Jul 17 20:54:42.374: INFO: stdout: ""
Jul 17 20:54:43.243: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.20.4.5 80'
Jul 17 20:54:44.447: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.20.4.5 80\nConnection to 172.20.4.5 80 port [tcp/http] succeeded!\n"
Jul 17 20:54:44.447: INFO: stdout: "externalip-test-sdtlr"
Jul 17 20:54:44.447: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:54:45.612: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Jul 17 20:54:45.612: INFO: stdout: ""
Jul 17 20:54:46.612: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:54:49.768: INFO: rc: 1
Jul 17 20:54:49.768: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 203.0.113.250 80
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:54:50.612: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:54:51.762: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Jul 17 20:54:51.762: INFO: stdout: ""
Jul 17 20:54:52.613: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:54:55.736: INFO: rc: 1
Jul 17 20:54:55.736: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 203.0.113.250 80
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:54:56.612: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:54:57.774: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Jul 17 20:54:57.774: INFO: stdout: ""
Jul 17 20:54:58.613: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:01.740: INFO: rc: 1
Jul 17 20:55:01.740: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ + echo hostName
nc -v -t -w 2 203.0.113.250 80
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:55:02.613: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:05.741: INFO: rc: 1
Jul 17 20:55:05.741: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ + echonc hostName
 -v -t -w 2 203.0.113.250 80
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:55:06.612: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:07.762: INFO: stderr: "+ nc -v -t -w 2 203.0.113.250 80\n+ echo hostName\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Jul 17 20:55:07.762: INFO: stdout: ""
Jul 17 20:55:08.613: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:11.758: INFO: rc: 1
Jul 17 20:55:11.758: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 203.0.113.250 80
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:55:12.612: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:15.747: INFO: rc: 1
Jul 17 20:55:15.747: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 203.0.113.250 80
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:55:16.613: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:19.736: INFO: rc: 1
Jul 17 20:55:19.736: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 203.0.113.250 80
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:55:20.612: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:23.761: INFO: rc: 1
Jul 17 20:55:23.761: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ nc -v -t -w 2 203.0.113.250 80+ 
echo hostName
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:55:24.612: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:27.782: INFO: rc: 1
Jul 17 20:55:27.782: INFO: Service reachability failing with error: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80:
Command stdout:

stderr:
+ nc -v -t -w 2 203.0.113.250 80
+ echo hostName
nc: connect to 203.0.113.250 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul 17 20:55:28.613: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7067 exec execpodrvbvj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 203.0.113.250 80'
Jul 17 20:55:29.783: INFO: stderr: "+ + echo hostNamenc\n -v -t -w 2 203.0.113.250 80\nConnection to 203.0.113.250 80 port [tcp/http] succeeded!\n"
Jul 17 20:55:29.783: INFO: stdout: "externalip-test-sdtlr"
[AfterEach] [sig-network] Services
... skipping 7 lines ...
• [SLOW TEST:84.220 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1177
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
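The ExternalIP block above repeatedly runs `kubectl exec ... -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 <ip> 80'` and treats an empty stdout or rc=1 as "not reachable yet, retry" until a backend hostname (here "externalip-test-sdtlr") comes back. A minimal sketch of that probe loop; the command line mirrors the log, while the retry cadence, deadline and helper name are assumptions rather than the framework's actual implementation:

    package sketch

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // probeServiceIP execs into a client pod and probes ip:port with nc until
    // a backend echoes its hostname back, or the deadline expires.
    func probeServiceIP(kubectl, server, kubeconfig, namespace, execPod, ip string, port int) (string, error) {
    	cmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", ip, port)
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command(kubectl,
    			"--server="+server, "--kubeconfig="+kubeconfig, "--namespace="+namespace,
    			"exec", execPod, "--", "/bin/sh", "-x", "-c", cmd).Output()
    		// nc may connect yet print nothing, or time out with rc 1; both cases retry.
    		if err == nil && strings.TrimSpace(string(out)) != "" {
    			return strings.TrimSpace(string(out)), nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return "", fmt.Errorf("service %s:%d not reachable before deadline", ip, port)
    }

------------------------------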
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:30.009: INFO: Only supported for providers [vsphere] (not aws)
... skipping 74 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 146 lines ...
Jul 17 20:54:59.715: INFO: PersistentVolumeClaim pvc-gfh8c found but phase is Pending instead of Bound.
Jul 17 20:55:01.821: INFO: PersistentVolumeClaim pvc-gfh8c found and phase=Bound (12.735676561s)
Jul 17 20:55:01.821: INFO: Waiting up to 3m0s for PersistentVolume local-fxp9r to have phase Bound
Jul 17 20:55:01.925: INFO: PersistentVolume local-fxp9r found and phase=Bound (104.324761ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ckv5
STEP: Creating a pod to test atomic-volume-subpath
Jul 17 20:55:02.240: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ckv5" in namespace "provisioning-4840" to be "Succeeded or Failed"
Jul 17 20:55:02.345: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Pending", Reason="", readiness=false. Elapsed: 104.697706ms
Jul 17 20:55:04.450: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210377892s
Jul 17 20:55:06.556: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316480603s
Jul 17 20:55:08.662: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Running", Reason="", readiness=true. Elapsed: 6.421990219s
Jul 17 20:55:10.768: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Running", Reason="", readiness=true. Elapsed: 8.528238646s
Jul 17 20:55:12.874: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Running", Reason="", readiness=true. Elapsed: 10.633715916s
... skipping 3 lines ...
Jul 17 20:55:21.300: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Running", Reason="", readiness=true. Elapsed: 19.060069895s
Jul 17 20:55:23.407: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Running", Reason="", readiness=true. Elapsed: 21.166816185s
Jul 17 20:55:25.511: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Running", Reason="", readiness=true. Elapsed: 23.271418019s
Jul 17 20:55:27.617: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Running", Reason="", readiness=true. Elapsed: 25.376827925s
Jul 17 20:55:29.721: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.481632608s
STEP: Saw pod success
Jul 17 20:55:29.722: INFO: Pod "pod-subpath-test-preprovisionedpv-ckv5" satisfied condition "Succeeded or Failed"
Jul 17 20:55:29.826: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-ckv5 container test-container-subpath-preprovisionedpv-ckv5: <nil>
STEP: delete the pod
Jul 17 20:55:30.042: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ckv5 to disappear
Jul 17 20:55:30.147: INFO: Pod pod-subpath-test-preprovisionedpv-ckv5 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ckv5
Jul 17 20:55:30.147: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ckv5" in namespace "provisioning-4840"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":15,"failed":0}

SSS
------------------------------
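Several blocks in this log follow the same pattern: "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", a phase check every couple of seconds with the elapsed time printed, then "Saw pod success". A minimal sketch of that wait loop; using kubectl with a jsonpath query is an assumption for illustration, the e2e framework polls through its own client:

    package sketch

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForPodTerminal polls the pod phase until it is Succeeded or Failed,
    // or until the timeout elapses, mirroring the log lines above.
    func waitForPodTerminal(namespace, pod string, timeout time.Duration) (string, error) {
    	start := time.Now()
    	for time.Since(start) < timeout {
    		out, err := exec.Command("kubectl", "get", "pod", pod,
    			"--namespace", namespace, "-o", "jsonpath={.status.phase}").Output()
    		phase := strings.TrimSpace(string(out))
    		if err == nil && (phase == "Succeeded" || phase == "Failed") {
    			return phase, nil
    		}
    		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", pod, phase, time.Since(start))
    		time.Sleep(2 * time.Second)
    	}
    	return "", fmt.Errorf("pod %s/%s did not reach a terminal phase within %s", namespace, pod, timeout)
    }

------------------------------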
[BeforeEach] [sig-api-machinery] Server request timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:33.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5577" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":6,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
STEP: creating execpod-noendpoints on node ip-172-20-38-184.eu-west-3.compute.internal
Jul 17 20:54:36.972: INFO: Creating new exec pod
Jul 17 20:54:43.286: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node ip-172-20-38-184.eu-west-3.compute.internal
Jul 17 20:54:43.286: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9507 exec execpod-noendpointsgv5dm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jul 17 20:55:19.448: INFO: rc: 1
Jul 17 20:55:19.448: INFO: error didn't contain 'REFUSED', keep trying: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9507 exec execpod-noendpointsgv5dm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
DNS: lookup no-pods on 172.20.0.10:53: no such host
command terminated with exit code 1

error:
exit status 1
Jul 17 20:55:21.449: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9507 exec execpod-noendpointsgv5dm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jul 17 20:55:25.591: INFO: rc: 1
Jul 17 20:55:25.591: INFO: error didn't contain 'REFUSED', keep trying: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9507 exec execpod-noendpointsgv5dm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
TIMEOUT
command terminated with exit code 1

error:
exit status 1
Jul 17 20:55:27.450: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9507 exec execpod-noendpointsgv5dm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jul 17 20:55:33.693: INFO: rc: 1
Jul 17 20:55:33.694: INFO: error contained 'REFUSED', as expected: error running /tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-9507 exec execpod-noendpointsgv5dm -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:33.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9507" for this suite.
[AfterEach] [sig-network] Services
... skipping 3 lines ...
• [SLOW TEST:57.652 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be rejected when no endpoints exist
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1968
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":6,"skipped":58,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:33.913: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:35.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7812" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":7,"skipped":78,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:55:30.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-7f639f8a-8bfb-4365-a3af-1503053fc17a
STEP: Creating a pod to test consume secrets
Jul 17 20:55:31.520: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c" in namespace "projected-5116" to be "Succeeded or Failed"
Jul 17 20:55:31.623: INFO: Pod "pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 103.454587ms
Jul 17 20:55:33.728: INFO: Pod "pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208531043s
Jul 17 20:55:35.832: INFO: Pod "pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312313252s
Jul 17 20:55:37.937: INFO: Pod "pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.417227484s
STEP: Saw pod success
Jul 17 20:55:37.937: INFO: Pod "pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c" satisfied condition "Succeeded or Failed"
Jul 17 20:55:38.040: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 17 20:55:38.254: INFO: Waiting for pod pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c to disappear
Jul 17 20:55:38.357: INFO: Pod pod-projected-secrets-c1ce4949-18d9-49e9-9552-dde83d94aa5c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.774 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:55:38.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4fe48709-57df-45c6-9995-c2eb5343186a
STEP: Creating a pod to test consume secrets
Jul 17 20:55:39.310: INFO: Waiting up to 5m0s for pod "pod-secrets-cf8f3332-fc3c-4a57-bab9-ab9a6cc3be8c" in namespace "secrets-4166" to be "Succeeded or Failed"
Jul 17 20:55:39.419: INFO: Pod "pod-secrets-cf8f3332-fc3c-4a57-bab9-ab9a6cc3be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 108.985047ms
Jul 17 20:55:41.522: INFO: Pod "pod-secrets-cf8f3332-fc3c-4a57-bab9-ab9a6cc3be8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212505469s
Jul 17 20:55:43.627: INFO: Pod "pod-secrets-cf8f3332-fc3c-4a57-bab9-ab9a6cc3be8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.317316073s
STEP: Saw pod success
Jul 17 20:55:43.627: INFO: Pod "pod-secrets-cf8f3332-fc3c-4a57-bab9-ab9a6cc3be8c" satisfied condition "Succeeded or Failed"
Jul 17 20:55:43.730: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-secrets-cf8f3332-fc3c-4a57-bab9-ab9a6cc3be8c container secret-volume-test: <nil>
STEP: delete the pod
Jul 17 20:55:43.949: INFO: Waiting for pod pod-secrets-cf8f3332-fc3c-4a57-bab9-ab9a6cc3be8c to disappear
Jul 17 20:55:44.052: INFO: Pod pod-secrets-cf8f3332-fc3c-4a57-bab9-ab9a6cc3be8c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.678 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:44.298: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 127 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:55:24.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 63 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:45.453: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 38 lines ...
Jul 17 20:55:14.376: INFO: PersistentVolumeClaim pvc-z4xgc found but phase is Pending instead of Bound.
Jul 17 20:55:16.480: INFO: PersistentVolumeClaim pvc-z4xgc found and phase=Bound (8.516800016s)
Jul 17 20:55:16.480: INFO: Waiting up to 3m0s for PersistentVolume local-g2xjb to have phase Bound
Jul 17 20:55:16.584: INFO: PersistentVolume local-g2xjb found and phase=Bound (103.892285ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rktr
STEP: Creating a pod to test atomic-volume-subpath
Jul 17 20:55:16.929: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rktr" in namespace "provisioning-6988" to be "Succeeded or Failed"
Jul 17 20:55:17.032: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Pending", Reason="", readiness=false. Elapsed: 103.169594ms
Jul 17 20:55:19.136: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207343626s
Jul 17 20:55:21.246: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316965156s
Jul 17 20:55:23.350: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421427933s
Jul 17 20:55:25.454: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524703537s
Jul 17 20:55:27.558: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Running", Reason="", readiness=true. Elapsed: 10.628722505s
... skipping 3 lines ...
Jul 17 20:55:35.981: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Running", Reason="", readiness=true. Elapsed: 19.051737437s
Jul 17 20:55:38.084: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Running", Reason="", readiness=true. Elapsed: 21.155124191s
Jul 17 20:55:40.188: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Running", Reason="", readiness=true. Elapsed: 23.258856134s
Jul 17 20:55:42.292: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Running", Reason="", readiness=true. Elapsed: 25.362689857s
Jul 17 20:55:44.396: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.467342758s
STEP: Saw pod success
Jul 17 20:55:44.396: INFO: Pod "pod-subpath-test-preprovisionedpv-rktr" satisfied condition "Succeeded or Failed"
Jul 17 20:55:44.499: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-rktr container test-container-subpath-preprovisionedpv-rktr: <nil>
STEP: delete the pod
Jul 17 20:55:44.712: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rktr to disappear
Jul 17 20:55:44.816: INFO: Pod pod-subpath-test-preprovisionedpv-rktr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rktr
Jul 17 20:55:44.816: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rktr" in namespace "provisioning-6988"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:46.263: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
• [SLOW TEST:13.158 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":28,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:46.989: INFO: Only supported for providers [gce gke] (not aws)
... skipping 116 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":3,"skipped":8,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:48.428: INFO: Driver "local" does not provide raw block - skipping
... skipping 96 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:55:46.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-35fcef85-1330-4755-86da-cdd6e356abdb
STEP: Creating a pod to test consume secrets
Jul 17 20:55:47.028: INFO: Waiting up to 5m0s for pod "pod-secrets-63856f8c-18c7-4daa-a008-bc7f02840ecd" in namespace "secrets-2548" to be "Succeeded or Failed"
Jul 17 20:55:47.131: INFO: Pod "pod-secrets-63856f8c-18c7-4daa-a008-bc7f02840ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 102.940754ms
Jul 17 20:55:49.237: INFO: Pod "pod-secrets-63856f8c-18c7-4daa-a008-bc7f02840ecd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208732911s
STEP: Saw pod success
Jul 17 20:55:49.237: INFO: Pod "pod-secrets-63856f8c-18c7-4daa-a008-bc7f02840ecd" satisfied condition "Succeeded or Failed"
Jul 17 20:55:49.341: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-secrets-63856f8c-18c7-4daa-a008-bc7f02840ecd container secret-volume-test: <nil>
STEP: delete the pod
Jul 17 20:55:49.575: INFO: Waiting for pod pod-secrets-63856f8c-18c7-4daa-a008-bc7f02840ecd to disappear
Jul 17 20:55:49.678: INFO: Pod pod-secrets-63856f8c-18c7-4daa-a008-bc7f02840ecd no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 31 lines ...
Jul 17 20:55:44.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 17 20:55:45.053: INFO: Waiting up to 5m0s for pod "downward-api-4e9a68d7-f7e5-4582-816e-6c65449d0c05" in namespace "downward-api-8990" to be "Succeeded or Failed"
Jul 17 20:55:45.156: INFO: Pod "downward-api-4e9a68d7-f7e5-4582-816e-6c65449d0c05": Phase="Pending", Reason="", readiness=false. Elapsed: 102.719596ms
Jul 17 20:55:47.260: INFO: Pod "downward-api-4e9a68d7-f7e5-4582-816e-6c65449d0c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20665831s
Jul 17 20:55:49.372: INFO: Pod "downward-api-4e9a68d7-f7e5-4582-816e-6c65449d0c05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.318816033s
STEP: Saw pod success
Jul 17 20:55:49.372: INFO: Pod "downward-api-4e9a68d7-f7e5-4582-816e-6c65449d0c05" satisfied condition "Succeeded or Failed"
Jul 17 20:55:49.476: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod downward-api-4e9a68d7-f7e5-4582-816e-6c65449d0c05 container dapi-container: <nil>
STEP: delete the pod
Jul 17 20:55:49.690: INFO: Waiting for pod downward-api-4e9a68d7-f7e5-4582-816e-6c65449d0c05 to disappear
Jul 17 20:55:49.793: INFO: Pod downward-api-4e9a68d7-f7e5-4582-816e-6c65449d0c05 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.577 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:50.012: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:50.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9682" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":8,"skipped":46,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:50.364: INFO: Only supported for providers [openstack] (not aws)
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:50.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-4322" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:50.743: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Jul 17 20:55:46.110: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2104" to be "Succeeded or Failed"
Jul 17 20:55:46.212: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 101.979074ms
Jul 17 20:55:48.315: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 2.205147916s
Jul 17 20:55:50.418: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308367716s
STEP: Saw pod success
Jul 17 20:55:50.418: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul 17 20:55:50.520: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jul 17 20:55:50.739: INFO: Waiting for pod pod-host-path-test to disappear
Jul 17 20:55:50.841: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.552 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":6,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:51.067: INFO: Only supported for providers [azure] (not aws)
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:53.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4998" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":7,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:53.329: INFO: Only supported for providers [openstack] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 17 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:55:32.480: INFO: >>> kubeConfig: /root/.kube/config
... skipping 19 lines ...
Jul 17 20:55:44.358: INFO: PersistentVolumeClaim pvc-s9tkm found but phase is Pending instead of Bound.
Jul 17 20:55:46.464: INFO: PersistentVolumeClaim pvc-s9tkm found and phase=Bound (6.421838625s)
Jul 17 20:55:46.464: INFO: Waiting up to 3m0s for PersistentVolume local-gbvjv to have phase Bound
Jul 17 20:55:46.569: INFO: PersistentVolume local-gbvjv found and phase=Bound (104.556987ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dcnq
STEP: Creating a pod to test subpath
Jul 17 20:55:46.888: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dcnq" in namespace "provisioning-6937" to be "Succeeded or Failed"
Jul 17 20:55:46.994: INFO: Pod "pod-subpath-test-preprovisionedpv-dcnq": Phase="Pending", Reason="", readiness=false. Elapsed: 105.885173ms
Jul 17 20:55:49.101: INFO: Pod "pod-subpath-test-preprovisionedpv-dcnq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.211940696s
STEP: Saw pod success
Jul 17 20:55:49.101: INFO: Pod "pod-subpath-test-preprovisionedpv-dcnq" satisfied condition "Succeeded or Failed"
Jul 17 20:55:49.208: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-dcnq container test-container-subpath-preprovisionedpv-dcnq: <nil>
STEP: delete the pod
Jul 17 20:55:49.424: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dcnq to disappear
Jul 17 20:55:49.528: INFO: Pod pod-subpath-test-preprovisionedpv-dcnq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dcnq
Jul 17 20:55:49.528: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dcnq" in namespace "provisioning-6937"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":5,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:53.584: INFO: Only supported for providers [openstack] (not aws)
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:55:54.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-1727" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":8,"skipped":35,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:55:54.934: INFO: Only supported for providers [vsphere] (not aws)
... skipping 126 lines ...
Jul 17 20:55:53.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 17 20:55:54.228: INFO: Waiting up to 5m0s for pod "pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68" in namespace "emptydir-5604" to be "Succeeded or Failed"
Jul 17 20:55:54.332: INFO: Pod "pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68": Phase="Pending", Reason="", readiness=false. Elapsed: 104.253356ms
Jul 17 20:55:56.438: INFO: Pod "pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209625183s
Jul 17 20:55:58.543: INFO: Pod "pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314906903s
Jul 17 20:56:00.648: INFO: Pod "pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.420206286s
STEP: Saw pod success
Jul 17 20:56:00.648: INFO: Pod "pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68" satisfied condition "Succeeded or Failed"
Jul 17 20:56:00.753: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68 container test-container: <nil>
STEP: delete the pod
Jul 17 20:56:00.987: INFO: Waiting for pod pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68 to disappear
Jul 17 20:56:01.092: INFO: Pod pod-6f701aaf-87f1-4a57-9570-03ce3cf76c68 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.707 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:01.329: INFO: Only supported for providers [azure] (not aws)
... skipping 14 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":5,"skipped":48,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:00.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:56:01.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9890" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":6,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:01.645: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 202 lines ...
Jul 17 20:55:12.970: INFO: stderr: ""
Jul 17 20:55:12.970: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jul 17 20:55:12.970: INFO: Waiting for all frontend pods to be Running.
Jul 17 20:55:23.120: INFO: Waiting for frontend to serve content.
Jul 17 20:55:23.231: INFO: Trying to add a new entry to the guestbook.
Jul 17 20:55:53.340: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: Status{status: "Failure", reason: "ServiceUnavailable", message: "error trying to reach service: dial tcp 172.20.52.104:80: i/o timeout"}
Jul 17 20:55:58.453: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
... skipping 31 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:01.884: INFO: Only supported for providers [azure] (not aws)
... skipping 56 lines ...
Jul 17 20:56:01.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 17 20:56:02.563: INFO: Waiting up to 5m0s for pod "pod-54ff44d7-8610-42a0-ae7d-05a10fbca44a" in namespace "emptydir-1233" to be "Succeeded or Failed"
Jul 17 20:56:02.666: INFO: Pod "pod-54ff44d7-8610-42a0-ae7d-05a10fbca44a": Phase="Pending", Reason="", readiness=false. Elapsed: 103.619478ms
Jul 17 20:56:04.771: INFO: Pod "pod-54ff44d7-8610-42a0-ae7d-05a10fbca44a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20791123s
STEP: Saw pod success
Jul 17 20:56:04.771: INFO: Pod "pod-54ff44d7-8610-42a0-ae7d-05a10fbca44a" satisfied condition "Succeeded or Failed"
Jul 17 20:56:04.875: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-54ff44d7-8610-42a0-ae7d-05a10fbca44a container test-container: <nil>
STEP: delete the pod
Jul 17 20:56:05.104: INFO: Waiting for pod pod-54ff44d7-8610-42a0-ae7d-05a10fbca44a to disappear
Jul 17 20:56:05.219: INFO: Pod pod-54ff44d7-8610-42a0-ae7d-05a10fbca44a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:56:05.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1233" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":32,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
Jul 17 20:54:38.115: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-3468
Jul 17 20:54:38.219: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-3468
Jul 17 20:54:38.324: INFO: creating *v1.StatefulSet: csi-mock-volumes-3468-6734/csi-mockplugin
Jul 17 20:54:38.431: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-3468
Jul 17 20:54:38.535: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-3468"
Jul 17 20:54:38.640: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-3468 to register on node ip-172-20-56-168.eu-west-3.compute.internal
I0717 20:54:42.856416   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0717 20:54:42.960858   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3468","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0717 20:54:43.070284   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0717 20:54:43.175099   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0717 20:54:43.403758   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-3468","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0717 20:54:44.221269   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-3468"},"Error":"","FullError":null}
STEP: Creating pod
Jul 17 20:54:48.767: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 17 20:54:48.873: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-p77f9] to have phase Bound
I0717 20:54:48.883785   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d05f235a-0336-4f42-9067-137ba0b366a8","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Jul 17 20:54:48.978: INFO: PersistentVolumeClaim pvc-p77f9 found but phase is Pending instead of Bound.
I0717 20:54:48.988785   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d05f235a-0336-4f42-9067-137ba0b366a8","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d05f235a-0336-4f42-9067-137ba0b366a8"}}},"Error":"","FullError":null}
Jul 17 20:54:51.083: INFO: PersistentVolumeClaim pvc-p77f9 found and phase=Bound (2.209255627s)
I0717 20:54:51.690800   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul 17 20:54:51.796: INFO: >>> kubeConfig: /root/.kube/config
I0717 20:54:52.496736   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d05f235a-0336-4f42-9067-137ba0b366a8/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d05f235a-0336-4f42-9067-137ba0b366a8","storage.kubernetes.io/csiProvisionerIdentity":"1626555283224-8081-csi-mock-csi-mock-volumes-3468"}},"Response":{},"Error":"","FullError":null}
I0717 20:54:52.822805   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul 17 20:54:52.936: INFO: >>> kubeConfig: /root/.kube/config
Jul 17 20:54:53.632: INFO: >>> kubeConfig: /root/.kube/config
Jul 17 20:54:54.324: INFO: >>> kubeConfig: /root/.kube/config
I0717 20:54:55.057766   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d05f235a-0336-4f42-9067-137ba0b366a8/globalmount","target_path":"/var/lib/kubelet/pods/6d0dafef-66bb-43ad-974a-372482d2da46/volumes/kubernetes.io~csi/pvc-d05f235a-0336-4f42-9067-137ba0b366a8/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d05f235a-0336-4f42-9067-137ba0b366a8","storage.kubernetes.io/csiProvisionerIdentity":"1626555283224-8081-csi-mock-csi-mock-volumes-3468"}},"Response":{},"Error":"","FullError":null}
Jul 17 20:54:57.606: INFO: Deleting pod "pvc-volume-tester-kfn7p" in namespace "csi-mock-volumes-3468"
Jul 17 20:54:57.711: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kfn7p" to be fully deleted
Jul 17 20:54:59.267: INFO: >>> kubeConfig: /root/.kube/config
I0717 20:55:00.004438   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/6d0dafef-66bb-43ad-974a-372482d2da46/volumes/kubernetes.io~csi/pvc-d05f235a-0336-4f42-9067-137ba0b366a8/mount"},"Response":{},"Error":"","FullError":null}
I0717 20:55:00.176605   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0717 20:55:00.280770   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d05f235a-0336-4f42-9067-137ba0b366a8/globalmount"},"Response":{},"Error":"","FullError":null}
I0717 20:55:04.078816   12470 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jul 17 20:55:05.053: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-p77f9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3468", SelfLink:"", UID:"d05f235a-0336-4f42-9067-137ba0b366a8", ResourceVersion:"4196", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152088, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00322fe00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00322fe18)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0033f8b30), VolumeMode:(*v1.PersistentVolumeMode)(0xc0033f8b40), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 17 20:55:05.053: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-p77f9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3468", SelfLink:"", UID:"d05f235a-0336-4f42-9067-137ba0b366a8", ResourceVersion:"4197", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152088, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3468"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030a1cc8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030a1ce0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0030a1cf8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0030a1d10)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0032b8f50), VolumeMode:(*v1.PersistentVolumeMode)(0xc0032b8f60), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 17 20:55:05.054: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-p77f9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3468", SelfLink:"", UID:"d05f235a-0336-4f42-9067-137ba0b366a8", ResourceVersion:"4215", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152088, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3468"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000966ea0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000966eb8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000966ed0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000966ee8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d05f235a-0336-4f42-9067-137ba0b366a8", StorageClassName:(*string)(0xc0037342e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0037342f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 17 20:55:05.054: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-p77f9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3468", SelfLink:"", UID:"d05f235a-0336-4f42-9067-137ba0b366a8", ResourceVersion:"4216", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152088, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3468"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e99ef0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e99f08)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e99f20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e99f38)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d05f235a-0336-4f42-9067-137ba0b366a8", StorageClassName:(*string)(0xc003d15a30), VolumeMode:(*v1.PersistentVolumeMode)(0xc003d15a40), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 17 20:55:05.054: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-p77f9", GenerateName:"pvc-", Namespace:"csi-mock-volumes-3468", SelfLink:"", UID:"d05f235a-0336-4f42-9067-137ba0b366a8", ResourceVersion:"4799", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152088, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(0xc003e99f68), DeletionGracePeriodSeconds:(*int64)(0xc0014be938), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-3468"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e99f80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e99f98)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003e99fb0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003e99fc8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d05f235a-0336-4f42-9067-137ba0b366a8", StorageClassName:(*string)(0xc003d15a80), VolumeMode:(*v1.PersistentVolumeMode)(0xc003d15a90), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":4,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:06.965: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 85 lines ...
Jul 17 20:55:59.071: INFO: PersistentVolumeClaim pvc-wtwbw found but phase is Pending instead of Bound.
Jul 17 20:56:01.176: INFO: PersistentVolumeClaim pvc-wtwbw found and phase=Bound (8.552296426s)
Jul 17 20:56:01.177: INFO: Waiting up to 3m0s for PersistentVolume local-hzhkb to have phase Bound
Jul 17 20:56:01.280: INFO: PersistentVolume local-hzhkb found and phase=Bound (103.863525ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wsxv
STEP: Creating a pod to test subpath
Jul 17 20:56:01.593: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wsxv" in namespace "provisioning-7909" to be "Succeeded or Failed"
Jul 17 20:56:01.697: INFO: Pod "pod-subpath-test-preprovisionedpv-wsxv": Phase="Pending", Reason="", readiness=false. Elapsed: 104.203199ms
Jul 17 20:56:03.802: INFO: Pod "pod-subpath-test-preprovisionedpv-wsxv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20917765s
Jul 17 20:56:05.907: INFO: Pod "pod-subpath-test-preprovisionedpv-wsxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31377977s
STEP: Saw pod success
Jul 17 20:56:05.907: INFO: Pod "pod-subpath-test-preprovisionedpv-wsxv" satisfied condition "Succeeded or Failed"
Jul 17 20:56:06.014: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-wsxv container test-container-volume-preprovisionedpv-wsxv: <nil>
STEP: delete the pod
Jul 17 20:56:06.240: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wsxv to disappear
Jul 17 20:56:06.344: INFO: Pod pod-subpath-test-preprovisionedpv-wsxv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wsxv
Jul 17 20:56:06.344: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wsxv" in namespace "provisioning-7909"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":4,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:07.834: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:56:10.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6309" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:10.859: INFO: Only supported for providers [vsphere] (not aws)
... skipping 252 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":6,"skipped":43,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 50 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-8caeb73d-c040-42da-bb7e-977a5eb6ff7d
STEP: Creating secret with name secret-projected-all-test-volume-6e3159c0-97ba-4963-a359-63d6611d1159
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 17 20:56:15.766: INFO: Waiting up to 5m0s for pod "projected-volume-9d237219-c834-46d7-ba26-10840f4cf4bf" in namespace "projected-4814" to be "Succeeded or Failed"
Jul 17 20:56:15.869: INFO: Pod "projected-volume-9d237219-c834-46d7-ba26-10840f4cf4bf": Phase="Pending", Reason="", readiness=false. Elapsed: 102.420196ms
Jul 17 20:56:17.971: INFO: Pod "projected-volume-9d237219-c834-46d7-ba26-10840f4cf4bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.205206612s
STEP: Saw pod success
Jul 17 20:56:17.971: INFO: Pod "projected-volume-9d237219-c834-46d7-ba26-10840f4cf4bf" satisfied condition "Succeeded or Failed"
Jul 17 20:56:18.074: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod projected-volume-9d237219-c834-46d7-ba26-10840f4cf4bf container projected-all-volume-test: <nil>
STEP: delete the pod
Jul 17 20:56:18.285: INFO: Waiting for pod projected-volume-9d237219-c834-46d7-ba26-10840f4cf4bf to disappear
Jul 17 20:56:18.387: INFO: Pod projected-volume-9d237219-c834-46d7-ba26-10840f4cf4bf no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 13 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul 17 20:56:05.976: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 17 20:56:05.976: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-gpbd
STEP: Creating a pod to test subpath
Jul 17 20:56:06.083: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-gpbd" in namespace "provisioning-354" to be "Succeeded or Failed"
Jul 17 20:56:06.187: INFO: Pod "pod-subpath-test-inlinevolume-gpbd": Phase="Pending", Reason="", readiness=false. Elapsed: 103.599713ms
Jul 17 20:56:08.292: INFO: Pod "pod-subpath-test-inlinevolume-gpbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208978873s
Jul 17 20:56:10.399: INFO: Pod "pod-subpath-test-inlinevolume-gpbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315736999s
Jul 17 20:56:12.503: INFO: Pod "pod-subpath-test-inlinevolume-gpbd": Phase="Running", Reason="", readiness=false. Elapsed: 6.420180028s
Jul 17 20:56:14.608: INFO: Pod "pod-subpath-test-inlinevolume-gpbd": Phase="Running", Reason="", readiness=false. Elapsed: 8.525174304s
Jul 17 20:56:16.713: INFO: Pod "pod-subpath-test-inlinevolume-gpbd": Phase="Running", Reason="", readiness=false. Elapsed: 10.629601494s
Jul 17 20:56:18.820: INFO: Pod "pod-subpath-test-inlinevolume-gpbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.736549169s
STEP: Saw pod success
Jul 17 20:56:18.820: INFO: Pod "pod-subpath-test-inlinevolume-gpbd" satisfied condition "Succeeded or Failed"
Jul 17 20:56:18.923: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-gpbd container test-container-volume-inlinevolume-gpbd: <nil>
STEP: delete the pod
Jul 17 20:56:19.144: INFO: Waiting for pod pod-subpath-test-inlinevolume-gpbd to disappear
Jul 17 20:56:19.256: INFO: Pod pod-subpath-test-inlinevolume-gpbd no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-gpbd
Jul 17 20:56:19.256: INFO: Deleting pod "pod-subpath-test-inlinevolume-gpbd" in namespace "provisioning-354"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":33,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:55:49.894: INFO: >>> kubeConfig: /root/.kube/config
... skipping 6 lines ...
Jul 17 20:55:50.412: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-8263579gt
STEP: creating a claim
Jul 17 20:55:50.516: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jul 17 20:55:50.724: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul 17 20:55:50.933: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:55:53.149: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:55:55.153: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:55:57.140: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:55:59.141: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:01.141: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:03.142: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:05.149: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:07.148: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:09.141: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:11.140: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:13.141: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:15.140: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:17.142: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:19.142: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:21.143: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-8263579gt",
  	... // 2 identical fields
  }

Jul 17 20:56:21.350: INFO: Error updating pvc aws5ph58: PersistentVolumeClaim "aws5ph58" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 102 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver aws doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
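
The long run of "spec is immutable after creation except resources.requests for bound claims" errors earlier is expected: that test deliberately grows a PVC whose StorageClass does not allow expansion. For contrast, a hedged client-go sketch of the supported path, a StorageClass with allowVolumeExpansion plus a requests.storage bump on a bound claim; the provisioner, object names, sizes, and kubeconfig path are assumptions:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A StorageClass that permits online expansion; the provisioner name is an assumption.
	allow := true
	sc := &storagev1.StorageClass{
		ObjectMeta:           metav1.ObjectMeta{Name: "ebs-expandable"},
		Provisioner:          "kubernetes.io/aws-ebs",
		AllowVolumeExpansion: &allow,
	}
	if _, err := cs.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Grow an existing, bound PVC by raising spec.resources.requests.storage only.
	pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(ctx, "data-pvc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pvc.Spec.Resources.Requests == nil {
		pvc.Spec.Resources.Requests = corev1.ResourceList{}
	}
	pvc.Spec.Resources.Requests[corev1.ResourceStorage] = resource.MustParse("2Gi")
	if _, err := cs.CoreV1().PersistentVolumeClaims("default").Update(ctx, pvc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```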
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":41,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:15.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
Jul 17 20:56:18.725: INFO: Creating a PV followed by a PVC
Jul 17 20:56:18.931: INFO: Waiting for PV local-pvkcn4w to bind to PVC pvc-fh9zz
Jul 17 20:56:18.931: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-fh9zz] to have phase Bound
Jul 17 20:56:19.034: INFO: PersistentVolumeClaim pvc-fh9zz found and phase=Bound (102.711026ms)
Jul 17 20:56:19.034: INFO: Waiting up to 3m0s for PersistentVolume local-pvkcn4w to have phase Bound
Jul 17 20:56:19.136: INFO: PersistentVolume local-pvkcn4w found and phase=Bound (101.817283ms)
[It] should fail scheduling due to different NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jul 17 20:56:19.349: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-2d40951f-26f8-46b3-aa4c-f42a3161b087] Namespace:persistent-local-volumes-test-8364 PodName:hostexec-ip-172-20-55-234.eu-west-3.compute.internal-5gwwr ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul 17 20:56:19.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:7.155 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeAffinity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:375
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity","total":-1,"completed":10,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:22.169: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 111 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should not deadlock when a pod's predecessor fails
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:250
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:23.605: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 113 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":79,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:25.298: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":46,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:18.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:7.045 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":8,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:983
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":6,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:25.673: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 163 lines ...
STEP: Destroying namespace "node-problem-detector-2912" for this suite.


S [SKIPPING] in Spec Setup (BeforeEach) [0.740 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:55
------------------------------
... skipping 109 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:56:26.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8393" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":9,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:8.061 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":7,"skipped":34,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:86.287 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 109 lines ...
• [SLOW TEST:74.274 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:130
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":9,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:30.435: INFO: Driver hostPath doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 332 lines ...
• [SLOW TEST:83.261 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1211
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":8,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:38.157: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 103 lines ...
Jul 17 20:56:28.160: INFO: PersistentVolumeClaim pvc-6whpx found but phase is Pending instead of Bound.
Jul 17 20:56:30.262: INFO: PersistentVolumeClaim pvc-6whpx found and phase=Bound (2.204573395s)
Jul 17 20:56:30.263: INFO: Waiting up to 3m0s for PersistentVolume local-4t968 to have phase Bound
Jul 17 20:56:30.365: INFO: PersistentVolume local-4t968 found and phase=Bound (102.21395ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-ml8r
STEP: Creating a pod to test exec-volume-test
Jul 17 20:56:30.682: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-ml8r" in namespace "volume-1220" to be "Succeeded or Failed"
Jul 17 20:56:30.795: INFO: Pod "exec-volume-test-preprovisionedpv-ml8r": Phase="Pending", Reason="", readiness=false. Elapsed: 112.397073ms
Jul 17 20:56:32.900: INFO: Pod "exec-volume-test-preprovisionedpv-ml8r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218196284s
Jul 17 20:56:35.003: INFO: Pod "exec-volume-test-preprovisionedpv-ml8r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320398362s
Jul 17 20:56:37.106: INFO: Pod "exec-volume-test-preprovisionedpv-ml8r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.423544803s
STEP: Saw pod success
Jul 17 20:56:37.106: INFO: Pod "exec-volume-test-preprovisionedpv-ml8r" satisfied condition "Succeeded or Failed"
Jul 17 20:56:37.208: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-ml8r container exec-container-preprovisionedpv-ml8r: <nil>
STEP: delete the pod
Jul 17 20:56:37.419: INFO: Waiting for pod exec-volume-test-preprovisionedpv-ml8r to disappear
Jul 17 20:56:37.520: INFO: Pod exec-volume-test-preprovisionedpv-ml8r no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-ml8r
Jul 17 20:56:37.521: INFO: Deleting pod "exec-volume-test-preprovisionedpv-ml8r" in namespace "volume-1220"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":11,"skipped":47,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:38.878: INFO: Only supported for providers [gce gke] (not aws)
... skipping 90 lines ...
Jul 17 20:56:27.952: INFO: PersistentVolumeClaim pvc-jgnds found but phase is Pending instead of Bound.
Jul 17 20:56:30.057: INFO: PersistentVolumeClaim pvc-jgnds found and phase=Bound (12.74166829s)
Jul 17 20:56:30.057: INFO: Waiting up to 3m0s for PersistentVolume local-66lb4 to have phase Bound
Jul 17 20:56:30.161: INFO: PersistentVolume local-66lb4 found and phase=Bound (104.121199ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xtcl
STEP: Creating a pod to test subpath
Jul 17 20:56:30.476: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xtcl" in namespace "provisioning-1932" to be "Succeeded or Failed"
Jul 17 20:56:30.580: INFO: Pod "pod-subpath-test-preprovisionedpv-xtcl": Phase="Pending", Reason="", readiness=false. Elapsed: 104.020823ms
Jul 17 20:56:32.686: INFO: Pod "pod-subpath-test-preprovisionedpv-xtcl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209570386s
Jul 17 20:56:34.791: INFO: Pod "pod-subpath-test-preprovisionedpv-xtcl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315013765s
STEP: Saw pod success
Jul 17 20:56:34.791: INFO: Pod "pod-subpath-test-preprovisionedpv-xtcl" satisfied condition "Succeeded or Failed"
Jul 17 20:56:34.896: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-xtcl container test-container-subpath-preprovisionedpv-xtcl: <nil>
STEP: delete the pod
Jul 17 20:56:35.109: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xtcl to disappear
Jul 17 20:56:35.214: INFO: Pod pod-subpath-test-preprovisionedpv-xtcl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xtcl
Jul 17 20:56:35.214: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xtcl" in namespace "provisioning-1932"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":52,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:38.957: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 99 lines ...
• [SLOW TEST:13.945 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":42,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:26.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:470
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":7,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul 17 20:56:39.521: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 17 20:56:39.627: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-prf9
STEP: Creating a pod to test subpath
Jul 17 20:56:39.735: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-prf9" in namespace "provisioning-2254" to be "Succeeded or Failed"
Jul 17 20:56:39.842: INFO: Pod "pod-subpath-test-inlinevolume-prf9": Phase="Pending", Reason="", readiness=false. Elapsed: 106.39817ms
Jul 17 20:56:41.946: INFO: Pod "pod-subpath-test-inlinevolume-prf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211165457s
Jul 17 20:56:44.052: INFO: Pod "pod-subpath-test-inlinevolume-prf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.316437284s
STEP: Saw pod success
Jul 17 20:56:44.052: INFO: Pod "pod-subpath-test-inlinevolume-prf9" satisfied condition "Succeeded or Failed"
Jul 17 20:56:44.156: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-prf9 container test-container-subpath-inlinevolume-prf9: <nil>
STEP: delete the pod
Jul 17 20:56:44.371: INFO: Waiting for pod pod-subpath-test-inlinevolume-prf9 to disappear
Jul 17 20:56:44.475: INFO: Pod pod-subpath-test-inlinevolume-prf9 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-prf9
Jul 17 20:56:44.475: INFO: Deleting pod "pod-subpath-test-inlinevolume-prf9" in namespace "provisioning-2254"
... skipping 31 lines ...
Jul 17 20:55:29.075: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-1938rz2zf      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1938    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1938rz2zf,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1938    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1938rz2zf,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1938    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1938rz2zf,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-1938rz2zf    46de1191-2f98-49bb-ae1d-997049a6e9e5 5825 0 2021-07-17 20:55:29 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-07-17 20:55:29 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-27wtj pvc- provisioning-1938  88b0dad3-9cf5-4f73-9696-d18e83740fbf 5838 0 2021-07-17 20:55:29 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-07-17 20:55:29 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-1938rz2zf,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Deleting pod pod-98cabdc5-e87f-4403-bb6a-2a6e289e9130 in namespace provisioning-1938
STEP: checking the created volume is writable on node {Name: Selector:map[] Affinity:nil}
Jul 17 20:55:50.224: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-writer-5svjx" in namespace "provisioning-1938" to be "Succeeded or Failed"
Jul 17 20:55:50.326: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 101.912975ms
Jul 17 20:55:52.429: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205314846s
Jul 17 20:55:54.533: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308399937s
Jul 17 20:55:56.636: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412255173s
Jul 17 20:55:58.739: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515318604s
Jul 17 20:56:00.842: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618330164s
... skipping 2 lines ...
Jul 17 20:56:07.158: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 16.933891581s
Jul 17 20:56:09.262: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.037759093s
Jul 17 20:56:11.366: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 21.141521184s
Jul 17 20:56:13.469: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Pending", Reason="", readiness=false. Elapsed: 23.244947567s
Jul 17 20:56:15.572: INFO: Pod "pvc-volume-tester-writer-5svjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.347985987s
STEP: Saw pod success
Jul 17 20:56:15.572: INFO: Pod "pvc-volume-tester-writer-5svjx" satisfied condition "Succeeded or Failed"
Jul 17 20:56:15.784: INFO: Pod pvc-volume-tester-writer-5svjx has the following logs: 
Jul 17 20:56:15.784: INFO: Deleting pod "pvc-volume-tester-writer-5svjx" in namespace "provisioning-1938"
Jul 17 20:56:15.890: INFO: Wait up to 5m0s for pod "pvc-volume-tester-writer-5svjx" to be fully deleted
STEP: checking the created volume has the correct mount options, is readable and retains data on the same node "ip-172-20-38-184.eu-west-3.compute.internal"
Jul 17 20:56:16.308: INFO: Waiting up to 15m0s for pod "pvc-volume-tester-reader-22qp4" in namespace "provisioning-1938" to be "Succeeded or Failed"
Jul 17 20:56:16.411: INFO: Pod "pvc-volume-tester-reader-22qp4": Phase="Pending", Reason="", readiness=false. Elapsed: 102.348403ms
Jul 17 20:56:18.514: INFO: Pod "pvc-volume-tester-reader-22qp4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20548404s
Jul 17 20:56:20.617: INFO: Pod "pvc-volume-tester-reader-22qp4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308908787s
Jul 17 20:56:22.719: INFO: Pod "pvc-volume-tester-reader-22qp4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411191716s
Jul 17 20:56:24.823: INFO: Pod "pvc-volume-tester-reader-22qp4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514291834s
Jul 17 20:56:26.938: INFO: Pod "pvc-volume-tester-reader-22qp4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.629832613s
Jul 17 20:56:29.040: INFO: Pod "pvc-volume-tester-reader-22qp4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.732237958s
STEP: Saw pod success
Jul 17 20:56:29.041: INFO: Pod "pvc-volume-tester-reader-22qp4" satisfied condition "Succeeded or Failed"
Jul 17 20:56:29.252: INFO: Pod pvc-volume-tester-reader-22qp4 has the following logs: hello world

Jul 17 20:56:29.252: INFO: Deleting pod "pvc-volume-tester-reader-22qp4" in namespace "provisioning-1938"
Jul 17 20:56:29.357: INFO: Wait up to 5m0s for pod "pvc-volume-tester-reader-22qp4" to be fully deleted
Jul 17 20:56:29.460: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-27wtj] to have phase Bound
Jul 17 20:56:29.562: INFO: PersistentVolumeClaim pvc-27wtj found and phase=Bound (102.394149ms)
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":4,"skipped":21,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 118 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":5,"skipped":49,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:7.548 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":12,"skipped":52,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:46.466: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 19 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [sig-cli] Kubectl Port forwarding
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:33.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename port-forwarding
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:47.706: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 75 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":51,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:48.288: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":8,"skipped":60,"failed":0}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:44.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-fc5e9cf6-8529-4cfd-84a4-bc451b1e3bba
STEP: Creating a pod to test consume configMaps
Jul 17 20:56:45.657: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00b8913e-198b-4952-b376-e8311885e5a7" in namespace "projected-6872" to be "Succeeded or Failed"
Jul 17 20:56:45.766: INFO: Pod "pod-projected-configmaps-00b8913e-198b-4952-b376-e8311885e5a7": Phase="Pending", Reason="", readiness=false. Elapsed: 108.03842ms
Jul 17 20:56:47.870: INFO: Pod "pod-projected-configmaps-00b8913e-198b-4952-b376-e8311885e5a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212899765s
STEP: Saw pod success
Jul 17 20:56:47.870: INFO: Pod "pod-projected-configmaps-00b8913e-198b-4952-b376-e8311885e5a7" satisfied condition "Succeeded or Failed"
Jul 17 20:56:47.975: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-projected-configmaps-00b8913e-198b-4952-b376-e8311885e5a7 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 20:56:48.191: INFO: Waiting for pod pod-projected-configmaps-00b8913e-198b-4952-b376-e8311885e5a7 to disappear
Jul 17 20:56:48.295: INFO: Pod pod-projected-configmaps-00b8913e-198b-4952-b376-e8311885e5a7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:56:49.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5818" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":60,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:49.834: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 56 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":60,"failed":0}
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:48.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 31 lines ...
Jul 17 20:56:29.422: INFO: PersistentVolumeClaim pvc-chzgd found but phase is Pending instead of Bound.
Jul 17 20:56:31.530: INFO: PersistentVolumeClaim pvc-chzgd found and phase=Bound (2.215548417s)
Jul 17 20:56:31.530: INFO: Waiting up to 3m0s for PersistentVolume aws-8x8h2 to have phase Bound
Jul 17 20:56:31.634: INFO: PersistentVolume aws-8x8h2 found and phase=Bound (103.836132ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-dcrc
STEP: Creating a pod to test exec-volume-test
Jul 17 20:56:31.950: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-dcrc" in namespace "volume-6608" to be "Succeeded or Failed"
Jul 17 20:56:32.055: INFO: Pod "exec-volume-test-preprovisionedpv-dcrc": Phase="Pending", Reason="", readiness=false. Elapsed: 105.308736ms
Jul 17 20:56:34.161: INFO: Pod "exec-volume-test-preprovisionedpv-dcrc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210875241s
Jul 17 20:56:36.271: INFO: Pod "exec-volume-test-preprovisionedpv-dcrc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321483296s
Jul 17 20:56:38.379: INFO: Pod "exec-volume-test-preprovisionedpv-dcrc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429401243s
Jul 17 20:56:40.484: INFO: Pod "exec-volume-test-preprovisionedpv-dcrc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.534139341s
STEP: Saw pod success
Jul 17 20:56:40.484: INFO: Pod "exec-volume-test-preprovisionedpv-dcrc" satisfied condition "Succeeded or Failed"
Jul 17 20:56:40.588: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-dcrc container exec-container-preprovisionedpv-dcrc: <nil>
STEP: delete the pod
Jul 17 20:56:40.804: INFO: Waiting for pod exec-volume-test-preprovisionedpv-dcrc to disappear
Jul 17 20:56:40.908: INFO: Pod exec-volume-test-preprovisionedpv-dcrc no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-dcrc
Jul 17 20:56:40.908: INFO: Deleting pod "exec-volume-test-preprovisionedpv-dcrc" in namespace "volume-6608"
STEP: Deleting pv and pvc
Jul 17 20:56:41.015: INFO: Deleting PersistentVolumeClaim "pvc-chzgd"
Jul 17 20:56:41.119: INFO: Deleting PersistentVolume "aws-8x8h2"
Jul 17 20:56:41.417: INFO: Couldn't delete PD "aws://eu-west-3a/vol-07437fea47e856c22", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-07437fea47e856c22 is currently attached to i-074f7d61809cdb109
	status code: 400, request id: af8194ae-e8e0-4d45-b6c9-be9d17c4a64b
Jul 17 20:56:47.025: INFO: Couldn't delete PD "aws://eu-west-3a/vol-07437fea47e856c22", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-07437fea47e856c22 is currently attached to i-074f7d61809cdb109
	status code: 400, request id: 8c6c0a66-35ff-448c-93b0-caca02cb3eb1
Jul 17 20:56:52.594: INFO: Successfully deleted PD "aws://eu-west-3a/vol-07437fea47e856c22".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:56:52.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6608" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":8,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:56:52.816: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
Jul 17 20:56:13.724: INFO: PersistentVolumeClaim pvc-w5hrr found but phase is Pending instead of Bound.
Jul 17 20:56:15.829: INFO: PersistentVolumeClaim pvc-w5hrr found and phase=Bound (6.418814033s)
Jul 17 20:56:15.829: INFO: Waiting up to 3m0s for PersistentVolume aws-kfbs9 to have phase Bound
Jul 17 20:56:15.934: INFO: PersistentVolume aws-kfbs9 found and phase=Bound (105.100871ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-k2lh
STEP: Creating a pod to test exec-volume-test
Jul 17 20:56:16.246: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-k2lh" in namespace "volume-6625" to be "Succeeded or Failed"
Jul 17 20:56:16.350: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 103.86579ms
Jul 17 20:56:18.454: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207955755s
Jul 17 20:56:20.559: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312201445s
Jul 17 20:56:22.663: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416911417s
Jul 17 20:56:24.770: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523559289s
Jul 17 20:56:26.875: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.628669039s
Jul 17 20:56:28.982: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.735368524s
Jul 17 20:56:31.110: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.864056905s
Jul 17 20:56:33.218: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.972003834s
Jul 17 20:56:35.323: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.07650916s
STEP: Saw pod success
Jul 17 20:56:35.323: INFO: Pod "exec-volume-test-preprovisionedpv-k2lh" satisfied condition "Succeeded or Failed"
Jul 17 20:56:35.426: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-k2lh container exec-container-preprovisionedpv-k2lh: <nil>
STEP: delete the pod
Jul 17 20:56:35.652: INFO: Waiting for pod exec-volume-test-preprovisionedpv-k2lh to disappear
Jul 17 20:56:35.756: INFO: Pod exec-volume-test-preprovisionedpv-k2lh no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-k2lh
Jul 17 20:56:35.756: INFO: Deleting pod "exec-volume-test-preprovisionedpv-k2lh" in namespace "volume-6625"
STEP: Deleting pv and pvc
Jul 17 20:56:35.860: INFO: Deleting PersistentVolumeClaim "pvc-w5hrr"
Jul 17 20:56:35.966: INFO: Deleting PersistentVolume "aws-kfbs9"
Jul 17 20:56:36.450: INFO: Couldn't delete PD "aws://eu-west-3a/vol-05304ce2c79d81c73", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05304ce2c79d81c73 is currently attached to i-0c87dd0e7e7e66410
	status code: 400, request id: 02acfb8b-7f8c-41dd-988e-0c6b55771f1d
Jul 17 20:56:42.084: INFO: Couldn't delete PD "aws://eu-west-3a/vol-05304ce2c79d81c73", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05304ce2c79d81c73 is currently attached to i-0c87dd0e7e7e66410
	status code: 400, request id: 2e01d40e-60e7-4741-ba8a-964732d46eed
Jul 17 20:56:47.667: INFO: Couldn't delete PD "aws://eu-west-3a/vol-05304ce2c79d81c73", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05304ce2c79d81c73 is currently attached to i-0c87dd0e7e7e66410
	status code: 400, request id: b418e273-c833-4c0c-ab02-47ca0e1a46d2
Jul 17 20:56:53.200: INFO: Couldn't delete PD "aws://eu-west-3a/vol-05304ce2c79d81c73", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05304ce2c79d81c73 is currently attached to i-0c87dd0e7e7e66410
	status code: 400, request id: 0174708e-c6c8-4c8f-832d-fcfb264db24c
Jul 17 20:56:58.756: INFO: Successfully deleted PD "aws://eu-west-3a/vol-05304ce2c79d81c73".
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:56:58.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-6625" for this suite.
... skipping 18 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Jul 17 20:56:53.494: INFO: Waiting up to 5m0s for pod "pod-always-succeed77cd8ebf-0f36-441c-bf39-708c0cff9077" in namespace "pods-1835" to be "Succeeded or Failed"
Jul 17 20:56:53.597: INFO: Pod "pod-always-succeed77cd8ebf-0f36-441c-bf39-708c0cff9077": Phase="Pending", Reason="", readiness=false. Elapsed: 103.537219ms
Jul 17 20:56:55.702: INFO: Pod "pod-always-succeed77cd8ebf-0f36-441c-bf39-708c0cff9077": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208328252s
Jul 17 20:56:57.807: INFO: Pod "pod-always-succeed77cd8ebf-0f36-441c-bf39-708c0cff9077": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31351373s
STEP: Saw pod success
Jul 17 20:56:57.808: INFO: Pod "pod-always-succeed77cd8ebf-0f36-441c-bf39-708c0cff9077" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:57:00.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":9,"skipped":45,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-auth] Metadata Concealment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
Jul 17 20:56:42.914: INFO: PersistentVolumeClaim pvc-zx9db found but phase is Pending instead of Bound.
Jul 17 20:56:45.019: INFO: PersistentVolumeClaim pvc-zx9db found and phase=Bound (2.207613231s)
Jul 17 20:56:45.019: INFO: Waiting up to 3m0s for PersistentVolume local-szvpg to have phase Bound
Jul 17 20:56:45.122: INFO: PersistentVolume local-szvpg found and phase=Bound (102.951007ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-kv26
STEP: Creating a pod to test subpath
Jul 17 20:56:45.438: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kv26" in namespace "provisioning-7487" to be "Succeeded or Failed"
Jul 17 20:56:45.542: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26": Phase="Pending", Reason="", readiness=false. Elapsed: 103.700133ms
Jul 17 20:56:47.645: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207152831s
Jul 17 20:56:49.753: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31515058s
Jul 17 20:56:51.857: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418883364s
Jul 17 20:56:53.962: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523742959s
Jul 17 20:56:56.066: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.627656355s
STEP: Saw pod success
Jul 17 20:56:56.066: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26" satisfied condition "Succeeded or Failed"
Jul 17 20:56:56.169: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-kv26 container test-container-subpath-preprovisionedpv-kv26: <nil>
STEP: delete the pod
Jul 17 20:56:56.381: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kv26 to disappear
Jul 17 20:56:56.486: INFO: Pod pod-subpath-test-preprovisionedpv-kv26 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kv26
Jul 17 20:56:56.486: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kv26" in namespace "provisioning-7487"
STEP: Creating pod pod-subpath-test-preprovisionedpv-kv26
STEP: Creating a pod to test subpath
Jul 17 20:56:56.693: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-kv26" in namespace "provisioning-7487" to be "Succeeded or Failed"
Jul 17 20:56:56.796: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26": Phase="Pending", Reason="", readiness=false. Elapsed: 103.134052ms
Jul 17 20:56:58.901: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207520544s
STEP: Saw pod success
Jul 17 20:56:58.901: INFO: Pod "pod-subpath-test-preprovisionedpv-kv26" satisfied condition "Succeeded or Failed"
Jul 17 20:56:59.004: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-kv26 container test-container-subpath-preprovisionedpv-kv26: <nil>
STEP: delete the pod
Jul 17 20:56:59.218: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-kv26 to disappear
Jul 17 20:56:59.322: INFO: Pod pod-subpath-test-preprovisionedpv-kv26 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-kv26
Jul 17 20:56:59.322: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-kv26" in namespace "provisioning-7487"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:02.325: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 60 lines ...
      Driver aws doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":10,"skipped":59,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:41.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:22.443 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":11,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:04.384: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
... skipping 17 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":4,"skipped":12,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:34.096: INFO: >>> kubeConfig: /root/.kube/config
... skipping 42 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:05.417: INFO: Only supported for providers [gce gke] (not aws)
... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":57,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:9.410 seconds]
[sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":10,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:10.469: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul 17 20:57:06.085: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 17 20:57:06.085: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-qb6b
STEP: Creating a pod to test subpath
Jul 17 20:57:06.191: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-qb6b" in namespace "provisioning-2364" to be "Succeeded or Failed"
Jul 17 20:57:06.294: INFO: Pod "pod-subpath-test-inlinevolume-qb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 103.177912ms
Jul 17 20:57:08.398: INFO: Pod "pod-subpath-test-inlinevolume-qb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207216286s
Jul 17 20:57:10.504: INFO: Pod "pod-subpath-test-inlinevolume-qb6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313334757s
STEP: Saw pod success
Jul 17 20:57:10.504: INFO: Pod "pod-subpath-test-inlinevolume-qb6b" satisfied condition "Succeeded or Failed"
Jul 17 20:57:10.609: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-qb6b container test-container-subpath-inlinevolume-qb6b: <nil>
STEP: delete the pod
Jul 17 20:57:10.838: INFO: Waiting for pod pod-subpath-test-inlinevolume-qb6b to disappear
Jul 17 20:57:10.941: INFO: Pod pod-subpath-test-inlinevolume-qb6b no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-qb6b
Jul 17 20:57:10.941: INFO: Deleting pod "pod-subpath-test-inlinevolume-qb6b" in namespace "provisioning-2364"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":12,"skipped":65,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:11.375: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 141 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should not require VolumeAttach for drivers without attachment
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment","total":-1,"completed":9,"skipped":87,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:15.047: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":4,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:16.607: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":7,"skipped":67,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:57:05.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-b387ff00-3216-49a0-9570-9cfa2b77ea41
STEP: Creating a pod to test consume secrets
Jul 17 20:57:06.200: INFO: Waiting up to 5m0s for pod "pod-secrets-641e989e-39bd-4d35-8676-65c025622e67" in namespace "secrets-8700" to be "Succeeded or Failed"
Jul 17 20:57:06.305: INFO: Pod "pod-secrets-641e989e-39bd-4d35-8676-65c025622e67": Phase="Pending", Reason="", readiness=false. Elapsed: 104.375037ms
Jul 17 20:57:08.411: INFO: Pod "pod-secrets-641e989e-39bd-4d35-8676-65c025622e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210209799s
Jul 17 20:57:10.518: INFO: Pod "pod-secrets-641e989e-39bd-4d35-8676-65c025622e67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317647346s
Jul 17 20:57:12.624: INFO: Pod "pod-secrets-641e989e-39bd-4d35-8676-65c025622e67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.424171371s
Jul 17 20:57:14.730: INFO: Pod "pod-secrets-641e989e-39bd-4d35-8676-65c025622e67": Phase="Running", Reason="", readiness=true. Elapsed: 8.529780496s
Jul 17 20:57:16.835: INFO: Pod "pod-secrets-641e989e-39bd-4d35-8676-65c025622e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.634955051s
STEP: Saw pod success
Jul 17 20:57:16.835: INFO: Pod "pod-secrets-641e989e-39bd-4d35-8676-65c025622e67" satisfied condition "Succeeded or Failed"
Jul 17 20:57:16.940: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-secrets-641e989e-39bd-4d35-8676-65c025622e67 container secret-volume-test: <nil>
STEP: delete the pod
Jul 17 20:57:17.155: INFO: Waiting for pod pod-secrets-641e989e-39bd-4d35-8676-65c025622e67 to disappear
Jul 17 20:57:17.261: INFO: Pod pod-secrets-641e989e-39bd-4d35-8676-65c025622e67 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.036 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:14.805 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":12,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:18.388: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 180 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Jul 17 20:57:17.325: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-1549aef3-dabc-48f8-bf49-e15b5305cf7d" in namespace "security-context-test-7674" to be "Succeeded or Failed"
Jul 17 20:57:17.430: INFO: Pod "busybox-readonly-true-1549aef3-dabc-48f8-bf49-e15b5305cf7d": Phase="Pending", Reason="", readiness=false. Elapsed: 104.480354ms
Jul 17 20:57:19.535: INFO: Pod "busybox-readonly-true-1549aef3-dabc-48f8-bf49-e15b5305cf7d": Phase="Failed", Reason="", readiness=false. Elapsed: 2.209530949s
Jul 17 20:57:19.535: INFO: Pod "busybox-readonly-true-1549aef3-dabc-48f8-bf49-e15b5305cf7d" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:57:19.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7674" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":8,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:19.759: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 152 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":9,"skipped":69,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":5,"skipped":33,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:21.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:62.189 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a failing exec liveness probe that took longer than the timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":6,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:24.079: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":10,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:52.799: INFO: >>> kubeConfig: /root/.kube/config
... skipping 5 lines ...
Jul 17 20:56:53.318: INFO: Creating resource for dynamic PV
Jul 17 20:56:53.318: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-2244ldwwn
STEP: creating a claim
STEP: Expanding non-expandable pvc
Jul 17 20:56:53.635: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul 17 20:56:53.846: INFO: Error updating pvc awskw9c4: PersistentVolumeClaim "awskw9c4" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-2244ldwwn",
  	... // 2 identical fields
  }

Jul 17 20:56:56 through 20:57:24.056: INFO: Error updating pvc awskw9c4 repeated with the identical "Forbidden: spec is immutable after creation except resources.requests for bound claims" message and field diff roughly every 2s while the test retried the expansion (15 identical repetitions omitted)

Jul 17 20:57:24.264: INFO: Error updating pvc awskw9c4: PersistentVolumeClaim "awskw9c4" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":11,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:24.807: INFO: Only supported for providers [gce gke] (not aws)
... skipping 69 lines ...
Jul 17 20:57:22.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 17 20:57:22.698: INFO: Waiting up to 5m0s for pod "pod-91651fdf-8418-4ef9-bb2f-33f09e39fe3e" in namespace "emptydir-7906" to be "Succeeded or Failed"
Jul 17 20:57:22.806: INFO: Pod "pod-91651fdf-8418-4ef9-bb2f-33f09e39fe3e": Phase="Pending", Reason="", readiness=false. Elapsed: 107.352688ms
Jul 17 20:57:24.911: INFO: Pod "pod-91651fdf-8418-4ef9-bb2f-33f09e39fe3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.212044748s
STEP: Saw pod success
Jul 17 20:57:24.911: INFO: Pod "pod-91651fdf-8418-4ef9-bb2f-33f09e39fe3e" satisfied condition "Succeeded or Failed"
Jul 17 20:57:25.015: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-91651fdf-8418-4ef9-bb2f-33f09e39fe3e container test-container: <nil>
STEP: delete the pod
Jul 17 20:57:25.233: INFO: Waiting for pod pod-91651fdf-8418-4ef9-bb2f-33f09e39fe3e to disappear
Jul 17 20:57:25.337: INFO: Pod pod-91651fdf-8418-4ef9-bb2f-33f09e39fe3e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 69 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":10,"skipped":85,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:25.677: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 20 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:57:28.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2885" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":74,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:29.086: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 61 lines ...
• [SLOW TEST:16.127 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":10,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:56:58.977: INFO: >>> kubeConfig: /root/.kube/config
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":18,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:57:05.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 50 lines ...
• [SLOW TEST:29.427 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:34.914: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 73 lines ...
Jul 17 20:57:33.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul 17 20:57:33.699: INFO: Waiting up to 5m0s for pod "security-context-dce017df-6d42-4c8b-9a9d-b1b1106c096f" in namespace "security-context-9267" to be "Succeeded or Failed"
Jul 17 20:57:33.802: INFO: Pod "security-context-dce017df-6d42-4c8b-9a9d-b1b1106c096f": Phase="Pending", Reason="", readiness=false. Elapsed: 103.504452ms
Jul 17 20:57:35.907: INFO: Pod "security-context-dce017df-6d42-4c8b-9a9d-b1b1106c096f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208025528s
STEP: Saw pod success
Jul 17 20:57:35.907: INFO: Pod "security-context-dce017df-6d42-4c8b-9a9d-b1b1106c096f" satisfied condition "Succeeded or Failed"
Jul 17 20:57:36.013: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod security-context-dce017df-6d42-4c8b-9a9d-b1b1106c096f container test-container: <nil>
STEP: delete the pod
Jul 17 20:57:36.228: INFO: Waiting for pod security-context-dce017df-6d42-4c8b-9a9d-b1b1106c096f to disappear
Jul 17 20:57:36.332: INFO: Pod security-context-dce017df-6d42-4c8b-9a9d-b1b1106c096f no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:57:36.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9267" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":7,"skipped":22,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:36.563: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
Jul 17 20:56:52.420: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-rlzjb] to have phase Bound
Jul 17 20:56:52.524: INFO: PersistentVolumeClaim pvc-rlzjb found and phase=Bound (104.076854ms)
STEP: Deleting the previously created pod
Jul 17 20:57:11.046: INFO: Deleting pod "pvc-volume-tester-bnz98" in namespace "csi-mock-volumes-6294"
Jul 17 20:57:11.152: INFO: Wait up to 5m0s for pod "pvc-volume-tester-bnz98" to be fully deleted
STEP: Checking CSI driver logs
Jul 17 20:57:21.472: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e2b4e437-ae1e-4520-b102-53f5caa73ebf/volumes/kubernetes.io~csi/pvc-4af00aac-e10d-4905-ae74-edfd271dbb65/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-bnz98
Jul 17 20:57:21.472: INFO: Deleting pod "pvc-volume-tester-bnz98" in namespace "csi-mock-volumes-6294"
STEP: Deleting claim pvc-rlzjb
Jul 17 20:57:21.806: INFO: Waiting up to 2m0s for PersistentVolume pvc-4af00aac-e10d-4905-ae74-edfd271dbb65 to get deleted
Jul 17 20:57:21.911: INFO: PersistentVolume pvc-4af00aac-e10d-4905-ae74-edfd271dbb65 was removed
STEP: Deleting storageclass csi-mock-volumes-6294-scswxj4
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1374
    token should not be plumbed down when csiServiceAccountTokenEnabled=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1402
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":8,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:44.054: INFO: Only supported for providers [azure] (not aws)
... skipping 63 lines ...
• [SLOW TEST:15.769 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":7,"skipped":37,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 23 lines ...
Jul 17 20:57:44.406: INFO: PersistentVolumeClaim pvc-2f6sp found but phase is Pending instead of Bound.
Jul 17 20:57:46.511: INFO: PersistentVolumeClaim pvc-2f6sp found and phase=Bound (12.73404367s)
Jul 17 20:57:46.511: INFO: Waiting up to 3m0s for PersistentVolume local-6sl77 to have phase Bound
Jul 17 20:57:46.615: INFO: PersistentVolume local-6sl77 found and phase=Bound (103.913213ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-2qnl
STEP: Creating a pod to test exec-volume-test
Jul 17 20:57:46.928: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-2qnl" in namespace "volume-4947" to be "Succeeded or Failed"
Jul 17 20:57:47.032: INFO: Pod "exec-volume-test-preprovisionedpv-2qnl": Phase="Pending", Reason="", readiness=false. Elapsed: 104.344035ms
Jul 17 20:57:49.138: INFO: Pod "exec-volume-test-preprovisionedpv-2qnl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209707019s
STEP: Saw pod success
Jul 17 20:57:49.138: INFO: Pod "exec-volume-test-preprovisionedpv-2qnl" satisfied condition "Succeeded or Failed"
Jul 17 20:57:49.242: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-2qnl container exec-container-preprovisionedpv-2qnl: <nil>
STEP: delete the pod
Jul 17 20:57:49.462: INFO: Waiting for pod exec-volume-test-preprovisionedpv-2qnl to disappear
Jul 17 20:57:49.566: INFO: Pod exec-volume-test-preprovisionedpv-2qnl no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-2qnl
Jul 17 20:57:49.566: INFO: Deleting pod "exec-volume-test-preprovisionedpv-2qnl" in namespace "volume-4947"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":13,"skipped":80,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:54.870: INFO: Only supported for providers [gce gke] (not aws)
... skipping 179 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with pvc data source
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:238
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":6,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:57:58.442: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 139 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:58:01.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7725" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":41,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:01.839: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 80 lines ...
Jul 17 20:58:02.588: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.730 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 27 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-5225/configmap-test-27fc6fbd-b3d0-44c4-89c6-867dfc3b05b5
STEP: Creating a pod to test consume configMaps
Jul 17 20:57:55.614: INFO: Waiting up to 5m0s for pod "pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd" in namespace "configmap-5225" to be "Succeeded or Failed"
Jul 17 20:57:55.717: INFO: Pod "pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 103.049504ms
Jul 17 20:57:57.820: INFO: Pod "pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206615158s
Jul 17 20:57:59.927: INFO: Pod "pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312936698s
Jul 17 20:58:02.030: INFO: Pod "pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.416581899s
STEP: Saw pod success
Jul 17 20:58:02.030: INFO: Pod "pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd" satisfied condition "Succeeded or Failed"
Jul 17 20:58:02.134: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd container env-test: <nil>
STEP: delete the pod
Jul 17 20:58:02.356: INFO: Waiting for pod pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd to disappear
Jul 17 20:58:02.459: INFO: Pod pod-configmaps-539e2f0f-2521-4f85-9595-2e4039316bcd no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.781 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:53.863 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":13,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":28,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:05.317: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 155 lines ...
Jul 17 20:57:58.494: INFO: PersistentVolumeClaim pvc-hn564 found but phase is Pending instead of Bound.
Jul 17 20:58:00.598: INFO: PersistentVolumeClaim pvc-hn564 found and phase=Bound (10.626738745s)
Jul 17 20:58:00.598: INFO: Waiting up to 3m0s for PersistentVolume local-tz4pk to have phase Bound
Jul 17 20:58:00.702: INFO: PersistentVolume local-tz4pk found and phase=Bound (103.897486ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wjmd
STEP: Creating a pod to test subpath
Jul 17 20:58:01.029: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wjmd" in namespace "provisioning-4595" to be "Succeeded or Failed"
Jul 17 20:58:01.132: INFO: Pod "pod-subpath-test-preprovisionedpv-wjmd": Phase="Pending", Reason="", readiness=false. Elapsed: 103.745763ms
Jul 17 20:58:03.239: INFO: Pod "pod-subpath-test-preprovisionedpv-wjmd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210433125s
Jul 17 20:58:05.344: INFO: Pod "pod-subpath-test-preprovisionedpv-wjmd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315673203s
STEP: Saw pod success
Jul 17 20:58:05.344: INFO: Pod "pod-subpath-test-preprovisionedpv-wjmd" satisfied condition "Succeeded or Failed"
Jul 17 20:58:05.448: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-wjmd container test-container-volume-preprovisionedpv-wjmd: <nil>
STEP: delete the pod
Jul 17 20:58:05.676: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wjmd to disappear
Jul 17 20:58:05.780: INFO: Pod pod-subpath-test-preprovisionedpv-wjmd no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wjmd
Jul 17 20:58:05.780: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wjmd" in namespace "provisioning-4595"
... skipping 190 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":4,"skipped":17,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:08.195: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:58:08.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5616" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":6,"skipped":59,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:08.440: INFO: Only supported for providers [gce gke] (not aws)
... skipping 21 lines ...
Jul 17 20:58:05.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 17 20:58:05.900: INFO: Waiting up to 5m0s for pod "pod-8bb700de-f159-4647-82ac-845fecbe1f32" in namespace "emptydir-9253" to be "Succeeded or Failed"
Jul 17 20:58:06.003: INFO: Pod "pod-8bb700de-f159-4647-82ac-845fecbe1f32": Phase="Pending", Reason="", readiness=false. Elapsed: 103.301808ms
Jul 17 20:58:08.107: INFO: Pod "pod-8bb700de-f159-4647-82ac-845fecbe1f32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20695508s
STEP: Saw pod success
Jul 17 20:58:08.107: INFO: Pod "pod-8bb700de-f159-4647-82ac-845fecbe1f32" satisfied condition "Succeeded or Failed"
Jul 17 20:58:08.210: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-8bb700de-f159-4647-82ac-845fecbe1f32 container test-container: <nil>
STEP: delete the pod
Jul 17 20:58:08.422: INFO: Waiting for pod pod-8bb700de-f159-4647-82ac-845fecbe1f32 to disappear
Jul 17 20:58:08.525: INFO: Pod pod-8bb700de-f159-4647-82ac-845fecbe1f32 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:58:08.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9253" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":70,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":9,"skipped":54,"failed":0}
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:58:07.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apply
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "apply-5905" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist","total":-1,"completed":10,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:09.055: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 54 lines ...
Jul 17 20:58:02.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
Jul 17 20:58:03.123: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 17 20:58:03.333: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9265" in namespace "provisioning-9265" to be "Succeeded or Failed"
Jul 17 20:58:03.440: INFO: Pod "hostpath-symlink-prep-provisioning-9265": Phase="Pending", Reason="", readiness=false. Elapsed: 106.629496ms
Jul 17 20:58:05.543: INFO: Pod "hostpath-symlink-prep-provisioning-9265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209903504s
STEP: Saw pod success
Jul 17 20:58:05.543: INFO: Pod "hostpath-symlink-prep-provisioning-9265" satisfied condition "Succeeded or Failed"
Jul 17 20:58:05.543: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9265" in namespace "provisioning-9265"
Jul 17 20:58:05.650: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9265" to be fully deleted
Jul 17 20:58:05.753: INFO: Creating resource for inline volume
Jul 17 20:58:05.753: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Jul 17 20:58:05.754: INFO: Deleting pod "pod-subpath-test-inlinevolume-w482" in namespace "provisioning-9265"
Jul 17 20:58:05.961: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9265" in namespace "provisioning-9265" to be "Succeeded or Failed"
Jul 17 20:58:06.065: INFO: Pod "hostpath-symlink-prep-provisioning-9265": Phase="Pending", Reason="", readiness=false. Elapsed: 103.561857ms
Jul 17 20:58:08.170: INFO: Pod "hostpath-symlink-prep-provisioning-9265": Phase="Running", Reason="", readiness=true. Elapsed: 2.208561608s
Jul 17 20:58:10.274: INFO: Pod "hostpath-symlink-prep-provisioning-9265": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.312408036s
STEP: Saw pod success
Jul 17 20:58:10.274: INFO: Pod "hostpath-symlink-prep-provisioning-9265" satisfied condition "Succeeded or Failed"
Jul 17 20:58:10.274: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9265" in namespace "provisioning-9265"
Jul 17 20:58:10.384: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9265" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:58:10.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9265" for this suite.
... skipping 42 lines ...
Jul 17 20:58:08.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Jul 17 20:58:09.070: INFO: Waiting up to 5m0s for pod "var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f" in namespace "var-expansion-7593" to be "Succeeded or Failed"
Jul 17 20:58:09.174: INFO: Pod "var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f": Phase="Pending", Reason="", readiness=false. Elapsed: 103.93986ms
Jul 17 20:58:11.277: INFO: Pod "var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206600895s
Jul 17 20:58:13.379: INFO: Pod "var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309147422s
Jul 17 20:58:15.482: INFO: Pod "var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.412283285s
STEP: Saw pod success
Jul 17 20:58:15.482: INFO: Pod "var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f" satisfied condition "Succeeded or Failed"
Jul 17 20:58:15.585: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f container dapi-container: <nil>
STEP: delete the pod
Jul 17 20:58:15.804: INFO: Waiting for pod var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f to disappear
Jul 17 20:58:15.907: INFO: Pod var-expansion-4dfa541d-1486-4790-af41-ea41a82b410f no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.662 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:12.974 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":5,"skipped":30,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:21.275: INFO: Only supported for providers [vsphere] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: vsphere]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [vsphere] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1437
------------------------------
... skipping 175 lines ...
Jul 17 20:58:17.013: INFO: Waiting for pod aws-client to disappear
Jul 17 20:58:17.116: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jul 17 20:58:17.116: INFO: Deleting PersistentVolumeClaim "pvc-8pjlh"
Jul 17 20:58:17.220: INFO: Deleting PersistentVolume "aws-wvkkv"
Jul 17 20:58:17.896: INFO: Couldn't delete PD "aws://eu-west-3a/vol-059956d6e73f8a4b3", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-059956d6e73f8a4b3 is currently attached to i-02ba71dc56c4adb77
	status code: 400, request id: d4e68faa-7495-4720-9bf4-6a8dfcb482b2
Jul 17 20:58:23.460: INFO: Successfully deleted PD "aws://eu-west-3a/vol-059956d6e73f8a4b3".
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:58:23.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-366" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":13,"skipped":58,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:23.701: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 45 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 17 20:58:16.747: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-700d657c-b9a4-45ea-97ff-a24f2c33cfe2" in namespace "security-context-test-1828" to be "Succeeded or Failed"
Jul 17 20:58:16.849: INFO: Pod "busybox-readonly-false-700d657c-b9a4-45ea-97ff-a24f2c33cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 102.008307ms
Jul 17 20:58:18.952: INFO: Pod "busybox-readonly-false-700d657c-b9a4-45ea-97ff-a24f2c33cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204863788s
Jul 17 20:58:21.054: INFO: Pod "busybox-readonly-false-700d657c-b9a4-45ea-97ff-a24f2c33cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307680288s
Jul 17 20:58:23.158: INFO: Pod "busybox-readonly-false-700d657c-b9a4-45ea-97ff-a24f2c33cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411252294s
Jul 17 20:58:25.262: INFO: Pod "busybox-readonly-false-700d657c-b9a4-45ea-97ff-a24f2c33cfe2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51546588s
Jul 17 20:58:27.366: INFO: Pod "busybox-readonly-false-700d657c-b9a4-45ea-97ff-a24f2c33cfe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.618912961s
Jul 17 20:58:27.366: INFO: Pod "busybox-readonly-false-700d657c-b9a4-45ea-97ff-a24f2c33cfe2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:58:27.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1828" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:27.664: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-0fa353a2-6ab2-4f87-b917-73f3e1321f6a
STEP: Creating a pod to test consume secrets
Jul 17 20:58:22.071: INFO: Waiting up to 5m0s for pod "pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf" in namespace "secrets-7803" to be "Succeeded or Failed"
Jul 17 20:58:22.175: INFO: Pod "pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf": Phase="Pending", Reason="", readiness=false. Elapsed: 103.493727ms
Jul 17 20:58:24.279: INFO: Pod "pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207537977s
Jul 17 20:58:26.382: INFO: Pod "pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311346863s
Jul 17 20:58:28.487: INFO: Pod "pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.416216668s
STEP: Saw pod success
Jul 17 20:58:28.487: INFO: Pod "pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf" satisfied condition "Succeeded or Failed"
Jul 17 20:58:28.591: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf container secret-volume-test: <nil>
STEP: delete the pod
Jul 17 20:58:28.811: INFO: Waiting for pod pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf to disappear
Jul 17 20:58:28.914: INFO: Pod pod-secrets-68445bf4-2f28-42b4-83fc-f95d29541ddf no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.778 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
Jul 17 20:58:27.542: INFO: Creating a PV followed by a PVC
Jul 17 20:58:27.750: INFO: Waiting for PV local-pv462hm to bind to PVC pvc-cd6b4
Jul 17 20:58:27.750: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-cd6b4] to have phase Bound
Jul 17 20:58:27.852: INFO: PersistentVolumeClaim pvc-cd6b4 found and phase=Bound (102.583901ms)
Jul 17 20:58:27.853: INFO: Waiting up to 3m0s for PersistentVolume local-pv462hm to have phase Bound
Jul 17 20:58:27.955: INFO: PersistentVolume local-pv462hm found and phase=Bound (102.910819ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jul 17 20:58:28.161: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-01abd27c-bb9b-449f-9993-654a82b66aeb] Namespace:persistent-local-volumes-test-8450 PodName:hostexec-ip-172-20-55-234.eu-west-3.compute.internal-mzth6 ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Jul 17 20:58:28.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating local PVCs and PVs
... skipping 22 lines ...

• [SLOW TEST:7.201 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":14,"skipped":62,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:30.938: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
... skipping 47 lines ...
Jul 17 20:57:15.107: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-2514 to register on node ip-172-20-36-75.eu-west-3.compute.internal
STEP: Creating pod
Jul 17 20:57:31.985: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 17 20:57:32.091: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-nb6wn] to have phase Bound
Jul 17 20:57:32.196: INFO: PersistentVolumeClaim pvc-nb6wn found and phase=Bound (104.466511ms)
STEP: checking for CSIInlineVolumes feature
Jul 17 20:57:48.933: INFO: Error getting logs for pod inline-volume-79k6p: the server rejected our request for an unknown reason (get pods inline-volume-79k6p)
Jul 17 20:57:49.140: INFO: Deleting pod "inline-volume-79k6p" in namespace "csi-mock-volumes-2514"
Jul 17 20:57:49.246: INFO: Wait up to 5m0s for pod "inline-volume-79k6p" to be fully deleted
STEP: Deleting the previously created pod
Jul 17 20:57:53.455: INFO: Deleting pod "pvc-volume-tester-45vl7" in namespace "csi-mock-volumes-2514"
Jul 17 20:57:53.561: INFO: Wait up to 5m0s for pod "pvc-volume-tester-45vl7" to be fully deleted
STEP: Checking CSI driver logs
Jul 17 20:58:05.876: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Jul 17 20:58:05.876: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-45vl7
Jul 17 20:58:05.876: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-2514
Jul 17 20:58:05.876: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 2a4ab2bb-0883-46af-baec-638f36abf99b
Jul 17 20:58:05.876: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jul 17 20:58:05.876: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/2a4ab2bb-0883-46af-baec-638f36abf99b/volumes/kubernetes.io~csi/pvc-311637a3-d0a9-4584-800c-7572057de6f1/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-45vl7
Jul 17 20:58:05.877: INFO: Deleting pod "pvc-volume-tester-45vl7" in namespace "csi-mock-volumes-2514"
STEP: Deleting claim pvc-nb6wn
Jul 17 20:58:06.190: INFO: Waiting up to 2m0s for PersistentVolume pvc-311637a3-d0a9-4584-800c-7572057de6f1 to get deleted
Jul 17 20:58:06.294: INFO: PersistentVolume pvc-311637a3-d0a9-4584-800c-7572057de6f1 was removed
STEP: Deleting storageclass csi-mock-volumes-2514-sccmmts
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should be passed when podInfoOnMount=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":11,"skipped":63,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:251.336 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":14,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:37.822: INFO: Only supported for providers [vsphere] (not aws)
... skipping 151 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1436
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1460
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":13,"skipped":83,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:40.349: INFO: Driver emptydir doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 166 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":5,"skipped":38,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":11,"skipped":92,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:58:02.510: INFO: >>> kubeConfig: /root/.kube/config
... skipping 15 lines ...
Jul 17 20:58:12.866: INFO: PersistentVolumeClaim pvc-msvvp found but phase is Pending instead of Bound.
Jul 17 20:58:14.976: INFO: PersistentVolumeClaim pvc-msvvp found and phase=Bound (8.52328281s)
Jul 17 20:58:14.976: INFO: Waiting up to 3m0s for PersistentVolume local-zk2gm to have phase Bound
Jul 17 20:58:15.078: INFO: PersistentVolume local-zk2gm found and phase=Bound (101.648368ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-z7bz
STEP: Creating a pod to test subpath
Jul 17 20:58:15.386: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z7bz" in namespace "provisioning-2776" to be "Succeeded or Failed"
Jul 17 20:58:15.488: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 101.92405ms
Jul 17 20:58:17.592: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205481919s
Jul 17 20:58:19.694: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308114982s
Jul 17 20:58:21.798: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411481433s
Jul 17 20:58:23.902: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515557309s
Jul 17 20:58:26.005: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618800375s
Jul 17 20:58:28.116: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.729599696s
Jul 17 20:58:30.219: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.83252028s
Jul 17 20:58:32.322: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.93604525s
Jul 17 20:58:34.426: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 19.03997825s
Jul 17 20:58:36.529: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.142924599s
STEP: Saw pod success
Jul 17 20:58:36.529: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz" satisfied condition "Succeeded or Failed"
Jul 17 20:58:36.631: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-z7bz container test-container-subpath-preprovisionedpv-z7bz: <nil>
STEP: delete the pod
Jul 17 20:58:36.842: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z7bz to disappear
Jul 17 20:58:36.944: INFO: Pod pod-subpath-test-preprovisionedpv-z7bz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z7bz
Jul 17 20:58:36.944: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z7bz" in namespace "provisioning-2776"
STEP: Creating pod pod-subpath-test-preprovisionedpv-z7bz
STEP: Creating a pod to test subpath
Jul 17 20:58:37.152: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-z7bz" in namespace "provisioning-2776" to be "Succeeded or Failed"
Jul 17 20:58:37.254: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 101.830441ms
Jul 17 20:58:39.357: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205098234s
Jul 17 20:58:41.460: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Running", Reason="", readiness=true. Elapsed: 4.30765389s
Jul 17 20:58:43.563: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.411564135s
STEP: Saw pod success
Jul 17 20:58:43.564: INFO: Pod "pod-subpath-test-preprovisionedpv-z7bz" satisfied condition "Succeeded or Failed"
Jul 17 20:58:43.666: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-z7bz container test-container-subpath-preprovisionedpv-z7bz: <nil>
STEP: delete the pod
Jul 17 20:58:43.878: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-z7bz to disappear
Jul 17 20:58:43.980: INFO: Pod pod-subpath-test-preprovisionedpv-z7bz no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-z7bz
Jul 17 20:58:43.980: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-z7bz" in namespace "provisioning-2776"
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":12,"skipped":65,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:46.703: INFO: Only supported for providers [azure] (not aws)
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":7,"skipped":28,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:58:10.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 17 20:58:11.339: INFO: Waiting up to 5m0s for pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4" in namespace "downward-api-119" to be "Succeeded or Failed"
Jul 17 20:58:11.441: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 102.301236ms
Jul 17 20:58:13.544: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205383288s
Jul 17 20:58:15.648: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308684962s
Jul 17 20:58:17.754: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414631852s
Jul 17 20:58:19.857: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5179524s
Jul 17 20:58:21.960: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.620712649s
... skipping 10 lines ...
Jul 17 20:58:45.101: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.761744052s
Jul 17 20:58:47.204: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.865442197s
Jul 17 20:58:49.309: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.970013603s
Jul 17 20:58:51.412: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.072753384s
Jul 17 20:58:53.514: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.175269378s
STEP: Saw pod success
Jul 17 20:58:53.514: INFO: Pod "downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4" satisfied condition "Succeeded or Failed"
Jul 17 20:58:53.617: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4 container dapi-container: <nil>
STEP: delete the pod
Jul 17 20:58:53.831: INFO: Waiting for pod downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4 to disappear
Jul 17 20:58:53.933: INFO: Pod downward-api-dcbe2c31-3195-438d-b887-f8f18b2e19e4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:43.416 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:54.161: INFO: Only supported for providers [gce gke] (not aws)
... skipping 220 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":9,"skipped":84,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":12,"skipped":92,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:58:45.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":13,"skipped":92,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:58:55.907: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 86 lines ...
Jul 17 20:58:43.011: INFO: PersistentVolumeClaim pvc-7s55r found but phase is Pending instead of Bound.
Jul 17 20:58:45.114: INFO: PersistentVolumeClaim pvc-7s55r found and phase=Bound (4.310176415s)
Jul 17 20:58:45.114: INFO: Waiting up to 3m0s for PersistentVolume local-mtwhl to have phase Bound
Jul 17 20:58:45.217: INFO: PersistentVolume local-mtwhl found and phase=Bound (103.33132ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lf67
STEP: Creating a pod to test subpath
Jul 17 20:58:45.529: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lf67" in namespace "provisioning-8282" to be "Succeeded or Failed"
Jul 17 20:58:45.632: INFO: Pod "pod-subpath-test-preprovisionedpv-lf67": Phase="Pending", Reason="", readiness=false. Elapsed: 103.473036ms
Jul 17 20:58:47.736: INFO: Pod "pod-subpath-test-preprovisionedpv-lf67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207241631s
Jul 17 20:58:49.840: INFO: Pod "pod-subpath-test-preprovisionedpv-lf67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311514881s
Jul 17 20:58:51.946: INFO: Pod "pod-subpath-test-preprovisionedpv-lf67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416879036s
Jul 17 20:58:54.049: INFO: Pod "pod-subpath-test-preprovisionedpv-lf67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.520340504s
Jul 17 20:58:56.153: INFO: Pod "pod-subpath-test-preprovisionedpv-lf67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.624464336s
STEP: Saw pod success
Jul 17 20:58:56.153: INFO: Pod "pod-subpath-test-preprovisionedpv-lf67" satisfied condition "Succeeded or Failed"
Jul 17 20:58:56.256: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-lf67 container test-container-subpath-preprovisionedpv-lf67: <nil>
STEP: delete the pod
Jul 17 20:58:57.210: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lf67 to disappear
Jul 17 20:58:57.313: INFO: Pod pod-subpath-test-preprovisionedpv-lf67 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lf67
Jul 17 20:58:57.313: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lf67" in namespace "provisioning-8282"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":15,"skipped":63,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Multi-AZ Cluster Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 82 lines ...
STEP: Deleting pod hostexec-ip-172-20-55-234.eu-west-3.compute.internal-t5gcf in namespace volumemode-6537
Jul 17 20:58:46.187: INFO: Deleting pod "pod-d8a8787f-b62f-4343-b771-530b17331a4a" in namespace "volumemode-6537"
Jul 17 20:58:46.292: INFO: Wait up to 5m0s for pod "pod-d8a8787f-b62f-4343-b771-530b17331a4a" to be fully deleted
STEP: Deleting pv and pvc
Jul 17 20:58:54.497: INFO: Deleting PersistentVolumeClaim "pvc-fn6zf"
Jul 17 20:58:54.628: INFO: Deleting PersistentVolume "aws-tthlz"
Jul 17 20:58:54.973: INFO: Couldn't delete PD "aws://eu-west-3a/vol-02c071bc2ab08ea77", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-02c071bc2ab08ea77 is currently attached to i-074f7d61809cdb109
	status code: 400, request id: cc5ec7c2-bf76-4ec2-b1bb-044c1275195c
Jul 17 20:59:00.530: INFO: Successfully deleted PD "aws://eu-west-3a/vol-02c071bc2ab08ea77".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:00.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volumemode-6537" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":15,"skipped":76,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:00.750: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 29 lines ...
Jul 17 20:58:09.593: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8799w2wqf
STEP: creating a claim
Jul 17 20:58:09.698: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-z9mt
STEP: Creating a pod to test subpath
Jul 17 20:58:10.013: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-z9mt" in namespace "provisioning-8799" to be "Succeeded or Failed"
Jul 17 20:58:10.117: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 103.906336ms
Jul 17 20:58:12.223: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210305662s
Jul 17 20:58:14.328: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314585027s
Jul 17 20:58:16.436: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422861676s
Jul 17 20:58:18.540: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52709476s
Jul 17 20:58:20.645: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.631763856s
... skipping 4 lines ...
Jul 17 20:58:31.171: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 21.157868922s
Jul 17 20:58:33.275: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 23.262324197s
Jul 17 20:58:35.380: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 25.366788625s
Jul 17 20:58:37.485: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Pending", Reason="", readiness=false. Elapsed: 27.471493669s
Jul 17 20:58:39.590: INFO: Pod "pod-subpath-test-dynamicpv-z9mt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.576633585s
STEP: Saw pod success
Jul 17 20:58:39.590: INFO: Pod "pod-subpath-test-dynamicpv-z9mt" satisfied condition "Succeeded or Failed"
Jul 17 20:58:39.694: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-z9mt container test-container-subpath-dynamicpv-z9mt: <nil>
STEP: delete the pod
Jul 17 20:58:39.907: INFO: Waiting for pod pod-subpath-test-dynamicpv-z9mt to disappear
Jul 17 20:58:40.011: INFO: Pod pod-subpath-test-dynamicpv-z9mt no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-z9mt
Jul 17 20:58:40.011: INFO: Deleting pod "pod-subpath-test-dynamicpv-z9mt" in namespace "provisioning-8799"
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":11,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:01.413: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 144 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-ce36884c-97e9-4f62-9bad-f54bcbc98716
STEP: Creating a pod to test consume configMaps
Jul 17 20:58:28.395: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0" in namespace "configmap-7116" to be "Succeeded or Failed"
Jul 17 20:58:28.497: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 101.988553ms
Jul 17 20:58:30.600: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204887717s
Jul 17 20:58:32.705: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310307306s
Jul 17 20:58:34.809: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.413520076s
Jul 17 20:58:36.912: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516807043s
Jul 17 20:58:39.016: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.620712616s
... skipping 6 lines ...
Jul 17 20:58:53.749: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 25.354102037s
Jul 17 20:58:55.852: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 27.4567649s
Jul 17 20:58:57.955: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 29.559721042s
Jul 17 20:59:00.058: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Pending", Reason="", readiness=false. Elapsed: 31.663393756s
Jul 17 20:59:02.161: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.765843807s
STEP: Saw pod success
Jul 17 20:59:02.161: INFO: Pod "pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0" satisfied condition "Succeeded or Failed"
Jul 17 20:59:02.263: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 20:59:02.476: INFO: Waiting for pod pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0 to disappear
Jul 17 20:59:02.578: INFO: Pod pod-configmaps-7ce3c39a-273e-4ef0-a7b6-d956c4e0deb0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 152 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:59
STEP: Creating configMap with name configmap-test-volume-014ff870-2379-46a7-ac2c-478c6bcf930a
STEP: Creating a pod to test consume configMaps
Jul 17 20:58:56.740: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980" in namespace "configmap-9284" to be "Succeeded or Failed"
Jul 17 20:58:56.843: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980": Phase="Pending", Reason="", readiness=false. Elapsed: 102.055034ms
Jul 17 20:58:58.946: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205194547s
Jul 17 20:59:01.048: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307927227s
Jul 17 20:59:03.151: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410749833s
Jul 17 20:59:05.255: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514141177s
Jul 17 20:59:07.358: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980": Phase="Pending", Reason="", readiness=false. Elapsed: 10.617661481s
Jul 17 20:59:09.462: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980": Phase="Pending", Reason="", readiness=false. Elapsed: 12.721530136s
Jul 17 20:59:11.566: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.825286741s
STEP: Saw pod success
Jul 17 20:59:11.566: INFO: Pod "pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980" satisfied condition "Succeeded or Failed"
Jul 17 20:59:11.668: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 20:59:11.878: INFO: Waiting for pod pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980 to disappear
Jul 17 20:59:11.980: INFO: Pod pod-configmaps-b7d61911-2c27-4619-a5f1-f354522eb980 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 82 lines ...
Jul 17 20:58:59.141: INFO: PersistentVolumeClaim pvc-rg9j6 found but phase is Pending instead of Bound.
Jul 17 20:59:01.246: INFO: PersistentVolumeClaim pvc-rg9j6 found and phase=Bound (10.626198368s)
Jul 17 20:59:01.246: INFO: Waiting up to 3m0s for PersistentVolume local-bx7l4 to have phase Bound
Jul 17 20:59:01.350: INFO: PersistentVolume local-bx7l4 found and phase=Bound (103.776465ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4p7p
STEP: Creating a pod to test subpath
Jul 17 20:59:01.663: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4p7p" in namespace "provisioning-2643" to be "Succeeded or Failed"
Jul 17 20:59:01.767: INFO: Pod "pod-subpath-test-preprovisionedpv-4p7p": Phase="Pending", Reason="", readiness=false. Elapsed: 104.074278ms
Jul 17 20:59:03.872: INFO: Pod "pod-subpath-test-preprovisionedpv-4p7p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208894077s
Jul 17 20:59:06.000: INFO: Pod "pod-subpath-test-preprovisionedpv-4p7p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336797388s
Jul 17 20:59:08.105: INFO: Pod "pod-subpath-test-preprovisionedpv-4p7p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442109508s
Jul 17 20:59:10.210: INFO: Pod "pod-subpath-test-preprovisionedpv-4p7p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.546670466s
STEP: Saw pod success
Jul 17 20:59:10.210: INFO: Pod "pod-subpath-test-preprovisionedpv-4p7p" satisfied condition "Succeeded or Failed"
Jul 17 20:59:10.313: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-4p7p container test-container-subpath-preprovisionedpv-4p7p: <nil>
STEP: delete the pod
Jul 17 20:59:10.526: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4p7p to disappear
Jul 17 20:59:10.630: INFO: Pod pod-subpath-test-preprovisionedpv-4p7p no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4p7p
Jul 17 20:59:10.630: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4p7p" in namespace "provisioning-2643"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":13,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:12.711: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:59:12.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul 17 20:59:12.920: INFO: Waiting up to 5m0s for pod "security-context-bc9daa67-e5d8-4e52-9ed3-cfba9956f1bc" in namespace "security-context-6559" to be "Succeeded or Failed"
Jul 17 20:59:13.024: INFO: Pod "security-context-bc9daa67-e5d8-4e52-9ed3-cfba9956f1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 104.294712ms
Jul 17 20:59:15.128: INFO: Pod "security-context-bc9daa67-e5d8-4e52-9ed3-cfba9956f1bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208506416s
STEP: Saw pod success
Jul 17 20:59:15.128: INFO: Pod "security-context-bc9daa67-e5d8-4e52-9ed3-cfba9956f1bc" satisfied condition "Succeeded or Failed"
Jul 17 20:59:15.232: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod security-context-bc9daa67-e5d8-4e52-9ed3-cfba9956f1bc container test-container: <nil>
STEP: delete the pod
Jul 17 20:59:15.449: INFO: Waiting for pod security-context-bc9daa67-e5d8-4e52-9ed3-cfba9956f1bc to disappear
Jul 17 20:59:15.553: INFO: Pod security-context-bc9daa67-e5d8-4e52-9ed3-cfba9956f1bc no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:15.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6559" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":93,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:15.773: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 107 lines ...
STEP: Destroying namespace "services-6819" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":5,"skipped":27,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":73,"failed":0}
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:57:25.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63
W0717 20:57:26.189136   12361 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
[It] should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-7530" for this suite.


• [SLOW TEST:111.586 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:294
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":11,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:17.166: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-8a1dbc79-7365-4578-beff-5434d1248b86
STEP: Creating a pod to test consume secrets
Jul 17 20:59:13.470: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4fd2f331-11f2-494c-a5c2-ee8089966ef6" in namespace "projected-1552" to be "Succeeded or Failed"
Jul 17 20:59:13.574: INFO: Pod "pod-projected-secrets-4fd2f331-11f2-494c-a5c2-ee8089966ef6": Phase="Pending", Reason="", readiness=false. Elapsed: 103.435017ms
Jul 17 20:59:15.678: INFO: Pod "pod-projected-secrets-4fd2f331-11f2-494c-a5c2-ee8089966ef6": Phase="Running", Reason="", readiness=true. Elapsed: 2.20762963s
Jul 17 20:59:17.784: INFO: Pod "pod-projected-secrets-4fd2f331-11f2-494c-a5c2-ee8089966ef6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313723309s
STEP: Saw pod success
Jul 17 20:59:17.784: INFO: Pod "pod-projected-secrets-4fd2f331-11f2-494c-a5c2-ee8089966ef6" satisfied condition "Succeeded or Failed"
Jul 17 20:59:17.892: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-projected-secrets-4fd2f331-11f2-494c-a5c2-ee8089966ef6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 17 20:59:18.120: INFO: Waiting for pod pod-projected-secrets-4fd2f331-11f2-494c-a5c2-ee8089966ef6 to disappear
Jul 17 20:59:18.223: INFO: Pod pod-projected-secrets-4fd2f331-11f2-494c-a5c2-ee8089966ef6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.699 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":85,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:18.461: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 76 lines ...
• [SLOW TEST:50.009 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":7,"skipped":44,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:19.181: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 35 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":16,"skipped":71,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:59:02.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Container restart
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:130
    should verify that container can restart successfully after configmaps modified
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:131
------------------------------
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":8,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:22.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-454" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":8,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":39,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":82,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:22.956: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 46 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:22.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":9,"skipped":63,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:23.097: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 102 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":14,"skipped":88,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:59:01.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jul 17 20:59:02.127: INFO: Waiting up to 5m0s for pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad" in namespace "security-context-9542" to be "Succeeded or Failed"
Jul 17 20:59:02.230: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 102.036249ms
Jul 17 20:59:04.333: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205407466s
Jul 17 20:59:06.436: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308012552s
Jul 17 20:59:08.546: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418679417s
Jul 17 20:59:10.649: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.521932651s
Jul 17 20:59:12.753: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Running", Reason="", readiness=true. Elapsed: 10.625715958s
Jul 17 20:59:14.857: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Running", Reason="", readiness=true. Elapsed: 12.729705043s
Jul 17 20:59:16.968: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Running", Reason="", readiness=true. Elapsed: 14.840560798s
Jul 17 20:59:19.072: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Running", Reason="", readiness=true. Elapsed: 16.944513419s
Jul 17 20:59:21.176: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Running", Reason="", readiness=true. Elapsed: 19.048852534s
Jul 17 20:59:23.283: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.155397319s
STEP: Saw pod success
Jul 17 20:59:23.283: INFO: Pod "security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad" satisfied condition "Succeeded or Failed"
Jul 17 20:59:23.386: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad container test-container: <nil>
STEP: delete the pod
Jul 17 20:59:23.606: INFO: Waiting for pod security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad to disappear
Jul 17 20:59:23.708: INFO: Pod security-context-82d1e39e-9fc7-4d10-b396-c230a40ef8ad no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:22.415 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":15,"skipped":88,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:23.953: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 14 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":8,"skipped":48,"failed":0}
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:58:08.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 150 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":9,"skipped":48,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 131 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":7,"skipped":19,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":15,"skipped":89,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:29.606: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
Jul 17 20:59:13.549: INFO: PersistentVolumeClaim pvc-z2mgp found but phase is Pending instead of Bound.
Jul 17 20:59:15.652: INFO: PersistentVolumeClaim pvc-z2mgp found and phase=Bound (10.616208303s)
Jul 17 20:59:15.652: INFO: Waiting up to 3m0s for PersistentVolume local-xgn6v to have phase Bound
Jul 17 20:59:15.756: INFO: PersistentVolume local-xgn6v found and phase=Bound (103.338276ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nlrj
STEP: Creating a pod to test subpath
Jul 17 20:59:16.066: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nlrj" in namespace "provisioning-4802" to be "Succeeded or Failed"
Jul 17 20:59:16.168: INFO: Pod "pod-subpath-test-preprovisionedpv-nlrj": Phase="Pending", Reason="", readiness=false. Elapsed: 102.678324ms
Jul 17 20:59:18.271: INFO: Pod "pod-subpath-test-preprovisionedpv-nlrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205359094s
Jul 17 20:59:20.373: INFO: Pod "pod-subpath-test-preprovisionedpv-nlrj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307632152s
Jul 17 20:59:22.477: INFO: Pod "pod-subpath-test-preprovisionedpv-nlrj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411121107s
Jul 17 20:59:24.582: INFO: Pod "pod-subpath-test-preprovisionedpv-nlrj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.516752429s
Jul 17 20:59:26.685: INFO: Pod "pod-subpath-test-preprovisionedpv-nlrj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.619802619s
STEP: Saw pod success
Jul 17 20:59:26.686: INFO: Pod "pod-subpath-test-preprovisionedpv-nlrj" satisfied condition "Succeeded or Failed"
Jul 17 20:59:26.787: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-nlrj container test-container-subpath-preprovisionedpv-nlrj: <nil>
STEP: delete the pod
Jul 17 20:59:27.090: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nlrj to disappear
Jul 17 20:59:27.195: INFO: Pod pod-subpath-test-preprovisionedpv-nlrj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nlrj
Jul 17 20:59:27.195: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nlrj" in namespace "provisioning-4802"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":9,"skipped":55,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Jul 17 20:59:13.546: INFO: PersistentVolumeClaim pvc-h5xtq found but phase is Pending instead of Bound.
Jul 17 20:59:15.650: INFO: PersistentVolumeClaim pvc-h5xtq found and phase=Bound (8.526568115s)
Jul 17 20:59:15.651: INFO: Waiting up to 3m0s for PersistentVolume local-mqxx5 to have phase Bound
Jul 17 20:59:15.757: INFO: PersistentVolume local-mqxx5 found and phase=Bound (106.335495ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nbsb
STEP: Creating a pod to test subpath
Jul 17 20:59:16.074: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nbsb" in namespace "provisioning-5086" to be "Succeeded or Failed"
Jul 17 20:59:16.178: INFO: Pod "pod-subpath-test-preprovisionedpv-nbsb": Phase="Pending", Reason="", readiness=false. Elapsed: 104.552148ms
Jul 17 20:59:18.283: INFO: Pod "pod-subpath-test-preprovisionedpv-nbsb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209635682s
Jul 17 20:59:20.388: INFO: Pod "pod-subpath-test-preprovisionedpv-nbsb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3139791s
Jul 17 20:59:22.492: INFO: Pod "pod-subpath-test-preprovisionedpv-nbsb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418504206s
Jul 17 20:59:24.597: INFO: Pod "pod-subpath-test-preprovisionedpv-nbsb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523226586s
Jul 17 20:59:26.703: INFO: Pod "pod-subpath-test-preprovisionedpv-nbsb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.629740911s
Jul 17 20:59:28.808: INFO: Pod "pod-subpath-test-preprovisionedpv-nbsb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.734579991s
STEP: Saw pod success
Jul 17 20:59:28.808: INFO: Pod "pod-subpath-test-preprovisionedpv-nbsb" satisfied condition "Succeeded or Failed"
Jul 17 20:59:28.912: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-nbsb container test-container-volume-preprovisionedpv-nbsb: <nil>
STEP: delete the pod
Jul 17 20:59:29.134: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nbsb to disappear
Jul 17 20:59:29.238: INFO: Pod pod-subpath-test-preprovisionedpv-nbsb no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nbsb
Jul 17 20:59:29.238: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nbsb" in namespace "provisioning-5086"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":85,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:30.898: INFO: Only supported for providers [gce gke] (not aws)
... skipping 129 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":11,"skipped":95,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:33.723: INFO: Only supported for providers [azure] (not aws)
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 17 20:59:24.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e" in namespace "downward-api-1172" to be "Succeeded or Failed"
Jul 17 20:59:24.697: INFO: Pod "downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e": Phase="Pending", Reason="", readiness=false. Elapsed: 102.697259ms
Jul 17 20:59:26.801: INFO: Pod "downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206371472s
Jul 17 20:59:28.904: INFO: Pod "downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309381521s
Jul 17 20:59:31.008: INFO: Pod "downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41319873s
Jul 17 20:59:33.111: INFO: Pod "downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.51629833s
STEP: Saw pod success
Jul 17 20:59:33.111: INFO: Pod "downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e" satisfied condition "Succeeded or Failed"
Jul 17 20:59:33.216: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e container client-container: <nil>
STEP: delete the pod
Jul 17 20:59:33.437: INFO: Waiting for pod downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e to disappear
Jul 17 20:59:33.539: INFO: Pod downwardapi-volume-12081c42-82fd-42e2-a8ea-f1dba04fe69e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.770 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":97,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:34.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5544" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":12,"skipped":99,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:35.235: INFO: Only supported for providers [openstack] (not aws)
... skipping 64 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":7,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:37.702: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 58 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1301
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":14,"skipped":119,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:59:12.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":15,"skipped":119,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:37.898: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-6820f9ff-d7f2-4a04-b66e-d6c54711b0da
STEP: Creating a pod to test consume configMaps
Jul 17 20:59:28.948: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3" in namespace "projected-6291" to be "Succeeded or Failed"
Jul 17 20:59:29.053: INFO: Pod "pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3": Phase="Pending", Reason="", readiness=false. Elapsed: 105.13841ms
Jul 17 20:59:31.158: INFO: Pod "pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209956359s
Jul 17 20:59:33.264: INFO: Pod "pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316492828s
Jul 17 20:59:35.394: INFO: Pod "pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446442251s
Jul 17 20:59:37.501: INFO: Pod "pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.553099009s
STEP: Saw pod success
Jul 17 20:59:37.501: INFO: Pod "pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3" satisfied condition "Succeeded or Failed"
Jul 17 20:59:37.606: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 20:59:37.822: INFO: Waiting for pod pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3 to disappear
Jul 17 20:59:37.930: INFO: Pod pod-projected-configmaps-49caa0af-31b4-4be1-98d7-9057257657d3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.935 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:38.152: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 116 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:316
    should preserve attachment policy when no CSIDriver present
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:338
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":8,"skipped":60,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:41.464: INFO: Only supported for providers [gce gke] (not aws)
... skipping 120 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":7,"skipped":71,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":84,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:59:02.796: INFO: >>> kubeConfig: /root/.kube/config
... skipping 4 lines ...
Jul 17 20:59:03.314: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jul 17 20:59:04.040: INFO: Successfully created a new PD: "aws://eu-west-3a/vol-03390219ead39b136".
Jul 17 20:59:04.040: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-c876
STEP: Creating a pod to test exec-volume-test
Jul 17 20:59:04.144: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-c876" in namespace "volume-1306" to be "Succeeded or Failed"
Jul 17 20:59:04.247: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 102.079477ms
Jul 17 20:59:06.350: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205104602s
Jul 17 20:59:08.453: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308068201s
Jul 17 20:59:10.556: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411315761s
Jul 17 20:59:12.660: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51504985s
Jul 17 20:59:14.763: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 10.618405678s
Jul 17 20:59:16.867: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 12.722086786s
Jul 17 20:59:18.973: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 14.828269236s
Jul 17 20:59:21.076: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 16.931253554s
Jul 17 20:59:23.179: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Pending", Reason="", readiness=false. Elapsed: 19.034161132s
Jul 17 20:59:25.282: INFO: Pod "exec-volume-test-inlinevolume-c876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.13755072s
STEP: Saw pod success
Jul 17 20:59:25.282: INFO: Pod "exec-volume-test-inlinevolume-c876" satisfied condition "Succeeded or Failed"
Jul 17 20:59:25.384: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod exec-volume-test-inlinevolume-c876 container exec-container-inlinevolume-c876: <nil>
STEP: delete the pod
Jul 17 20:59:25.602: INFO: Waiting for pod exec-volume-test-inlinevolume-c876 to disappear
Jul 17 20:59:25.704: INFO: Pod exec-volume-test-inlinevolume-c876 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-c876
Jul 17 20:59:25.704: INFO: Deleting pod "exec-volume-test-inlinevolume-c876" in namespace "volume-1306"
Jul 17 20:59:26.046: INFO: Couldn't delete PD "aws://eu-west-3a/vol-03390219ead39b136", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03390219ead39b136 is currently attached to i-074f7d61809cdb109
	status code: 400, request id: 9d03b2fc-36a3-4e44-b724-e63806c3cdc1
Jul 17 20:59:31.629: INFO: Couldn't delete PD "aws://eu-west-3a/vol-03390219ead39b136", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03390219ead39b136 is currently attached to i-074f7d61809cdb109
	status code: 400, request id: e678e2aa-5795-44f4-8081-262b764473f1
Jul 17 20:59:37.303: INFO: Couldn't delete PD "aws://eu-west-3a/vol-03390219ead39b136", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-03390219ead39b136 is currently attached to i-074f7d61809cdb109
	status code: 400, request id: 554fae0b-50ff-484e-b3a5-8ff8184b8609
Jul 17 20:59:42.900: INFO: Successfully deleted PD "aws://eu-west-3a/vol-03390219ead39b136".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:42.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1306" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":10,"skipped":84,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Volume limits
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:43.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-limits-on-node-4461" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Volume limits should verify that all nodes have volume limits","total":-1,"completed":11,"skipped":88,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:44.094: INFO: Only supported for providers [azure] (not aws)
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":103,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:44.591: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 76 lines ...
• [SLOW TEST:30.199 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":6,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:47.148: INFO: Only supported for providers [vsphere] (not aws)
... skipping 23 lines ...
Jul 17 20:59:37.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Jul 17 20:59:38.379: INFO: Waiting up to 5m0s for pod "var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8" in namespace "var-expansion-7235" to be "Succeeded or Failed"
Jul 17 20:59:38.484: INFO: Pod "var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8": Phase="Pending", Reason="", readiness=false. Elapsed: 104.711243ms
Jul 17 20:59:40.590: INFO: Pod "var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210392513s
Jul 17 20:59:42.700: INFO: Pod "var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320349333s
Jul 17 20:59:44.805: INFO: Pod "var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425697573s
Jul 17 20:59:46.912: INFO: Pod "var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532932368s
Jul 17 20:59:49.019: INFO: Pod "var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.639134484s
STEP: Saw pod success
Jul 17 20:59:49.019: INFO: Pod "var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8" satisfied condition "Succeeded or Failed"
Jul 17 20:59:49.124: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8 container dapi-container: <nil>
STEP: delete the pod
Jul 17 20:59:49.342: INFO: Waiting for pod var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8 to disappear
Jul 17 20:59:49.448: INFO: Pod var-expansion-c291a553-9188-4901-b0db-4b4f98fd89a8 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.910 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":8,"skipped":64,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:49.674: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":13,"skipped":106,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:59:26.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":14,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:50.160: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 71 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  With a server listening on localhost
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    should support forwarding over websockets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:490
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":11,"skipped":96,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 5 lines ...
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: Gathering metrics
W0717 20:54:50.674122   12294 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul 17 20:59:50.883: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:50.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2834" for this suite.


• [SLOW TEST:301.791 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:51.107: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 28 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:51.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-2437" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":12,"skipped":97,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":16,"skipped":101,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 20:59:52.116: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 23 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Jul 17 20:59:50.328: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4241" to be "Succeeded or Failed"
Jul 17 20:59:50.433: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 104.615192ms
Jul 17 20:59:52.537: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209325845s
STEP: Saw pod success
Jul 17 20:59:52.537: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul 17 20:59:52.642: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jul 17 20:59:52.860: INFO: Waiting for pod pod-host-path-test to disappear
Jul 17 20:59:52.964: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 20:59:52.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4241" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":9,"skipped":67,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Jul 17 20:59:44.361: INFO: PersistentVolumeClaim pvc-4ppbq found but phase is Pending instead of Bound.
Jul 17 20:59:46.465: INFO: PersistentVolumeClaim pvc-4ppbq found and phase=Bound (12.730456076s)
Jul 17 20:59:46.465: INFO: Waiting up to 3m0s for PersistentVolume local-58hhh to have phase Bound
Jul 17 20:59:46.573: INFO: PersistentVolume local-58hhh found and phase=Bound (107.677922ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-ghpt
STEP: Creating a pod to test subpath
Jul 17 20:59:46.886: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-ghpt" in namespace "provisioning-6031" to be "Succeeded or Failed"
Jul 17 20:59:46.990: INFO: Pod "pod-subpath-test-preprovisionedpv-ghpt": Phase="Pending", Reason="", readiness=false. Elapsed: 103.836324ms
Jul 17 20:59:49.115: INFO: Pod "pod-subpath-test-preprovisionedpv-ghpt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228939916s
Jul 17 20:59:51.220: INFO: Pod "pod-subpath-test-preprovisionedpv-ghpt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3332577s
Jul 17 20:59:53.325: INFO: Pod "pod-subpath-test-preprovisionedpv-ghpt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.438160408s
STEP: Saw pod success
Jul 17 20:59:53.325: INFO: Pod "pod-subpath-test-preprovisionedpv-ghpt" satisfied condition "Succeeded or Failed"
Jul 17 20:59:53.429: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-ghpt container test-container-volume-preprovisionedpv-ghpt: <nil>
STEP: delete the pod
Jul 17 20:59:53.649: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-ghpt to disappear
Jul 17 20:59:53.752: INFO: Pod pod-subpath-test-preprovisionedpv-ghpt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-ghpt
Jul 17 20:59:53.753: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-ghpt" in namespace "provisioning-6031"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":9,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 106 lines ...
Jul 17 20:59:45.057: INFO: PersistentVolumeClaim pvc-2k2t7 found and phase=Bound (6.420479781s)
Jul 17 20:59:45.057: INFO: Waiting up to 3m0s for PersistentVolume nfs-q2ztz to have phase Bound
Jul 17 20:59:45.162: INFO: PersistentVolume nfs-q2ztz found and phase=Bound (104.264121ms)
STEP: Checking pod has write access to PersistentVolume
Jul 17 20:59:45.370: INFO: Creating nfs test pod
Jul 17 20:59:45.475: INFO: Pod should terminate with exitcode 0 (success)
Jul 17 20:59:45.475: INFO: Waiting up to 5m0s for pod "pvc-tester-hl7t7" in namespace "pv-1275" to be "Succeeded or Failed"
Jul 17 20:59:45.579: INFO: Pod "pvc-tester-hl7t7": Phase="Pending", Reason="", readiness=false. Elapsed: 103.824797ms
Jul 17 20:59:47.684: INFO: Pod "pvc-tester-hl7t7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209552858s
Jul 17 20:59:49.790: INFO: Pod "pvc-tester-hl7t7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315003495s
Jul 17 20:59:51.894: INFO: Pod "pvc-tester-hl7t7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.419571308s
Jul 17 20:59:53.999: INFO: Pod "pvc-tester-hl7t7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.52396693s
STEP: Saw pod success
Jul 17 20:59:53.999: INFO: Pod "pvc-tester-hl7t7" satisfied condition "Succeeded or Failed"
Jul 17 20:59:53.999: INFO: Pod pvc-tester-hl7t7 succeeded 
Jul 17 20:59:53.999: INFO: Deleting pod "pvc-tester-hl7t7" in namespace "pv-1275"
Jul 17 20:59:54.107: INFO: Wait up to 5m0s for pod "pvc-tester-hl7t7" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul 17 20:59:54.212: INFO: Deleting PVC pvc-2k2t7 to trigger reclamation of PV 
Jul 17 20:59:54.212: INFO: Deleting PersistentVolumeClaim "pvc-2k2t7"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and non-pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:178
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","total":-1,"completed":15,"skipped":85,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 194 lines ...
Jul 17 21:00:00.112: INFO: Waiting for pod aws-client to disappear
Jul 17 21:00:00.215: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
STEP: Deleting pv and pvc
Jul 17 21:00:00.216: INFO: Deleting PersistentVolumeClaim "pvc-hzq98"
Jul 17 21:00:00.321: INFO: Deleting PersistentVolume "aws-5jhh6"
Jul 17 21:00:00.996: INFO: Couldn't delete PD "aws://eu-west-3a/vol-08e905e71079b8fb3", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08e905e71079b8fb3 is currently attached to i-02ba71dc56c4adb77
	status code: 400, request id: d24a903c-06c1-4461-8d8b-329056586f51
Jul 17 21:00:06.570: INFO: Couldn't delete PD "aws://eu-west-3a/vol-08e905e71079b8fb3", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-08e905e71079b8fb3 is currently attached to i-02ba71dc56c4adb77
	status code: 400, request id: 6a8a6b85-3fd3-4fb2-8bde-bd87541358a6
Jul 17 21:00:12.128: INFO: Successfully deleted PD "aws://eu-west-3a/vol-08e905e71079b8fb3".
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:00:12.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-1864" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":6,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:12.353: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 163 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":7,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:15.122: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 104 lines ...
• [SLOW TEST:25.383 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":17,"skipped":107,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":17,"skipped":71,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 20:59:21.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 97 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1134
    CSIStorageCapacity unused
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1177
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused","total":-1,"completed":18,"skipped":71,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:19.567: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 224 lines ...
Jul 17 21:00:12.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul 17 21:00:13.126: INFO: Waiting up to 5m0s for pod "downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1" in namespace "downward-api-2384" to be "Succeeded or Failed"
Jul 17 21:00:13.230: INFO: Pod "downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1": Phase="Pending", Reason="", readiness=false. Elapsed: 103.900646ms
Jul 17 21:00:15.335: INFO: Pod "downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208140886s
Jul 17 21:00:17.446: INFO: Pod "downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319783578s
Jul 17 21:00:19.551: INFO: Pod "downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424765641s
STEP: Saw pod success
Jul 17 21:00:19.551: INFO: Pod "downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1" satisfied condition "Succeeded or Failed"
Jul 17 21:00:19.655: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1 container dapi-container: <nil>
STEP: delete the pod
Jul 17 21:00:19.869: INFO: Waiting for pod downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1 to disappear
Jul 17 21:00:19.972: INFO: Pod downward-api-7e82fb3b-4297-4d55-b96f-251df75349f1 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.680 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":44,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:20.194: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 194 lines ...
Jul 17 21:00:15.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul 17 21:00:15.801: INFO: Waiting up to 5m0s for pod "security-context-0e251b55-4120-486d-aa4b-c9b0321986b1" in namespace "security-context-8455" to be "Succeeded or Failed"
Jul 17 21:00:15.905: INFO: Pod "security-context-0e251b55-4120-486d-aa4b-c9b0321986b1": Phase="Pending", Reason="", readiness=false. Elapsed: 103.388844ms
Jul 17 21:00:18.009: INFO: Pod "security-context-0e251b55-4120-486d-aa4b-c9b0321986b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207654943s
Jul 17 21:00:20.113: INFO: Pod "security-context-0e251b55-4120-486d-aa4b-c9b0321986b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311201301s
STEP: Saw pod success
Jul 17 21:00:20.113: INFO: Pod "security-context-0e251b55-4120-486d-aa4b-c9b0321986b1" satisfied condition "Succeeded or Failed"
Jul 17 21:00:20.216: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod security-context-0e251b55-4120-486d-aa4b-c9b0321986b1 container test-container: <nil>
STEP: delete the pod
Jul 17 21:00:20.427: INFO: Waiting for pod security-context-0e251b55-4120-486d-aa4b-c9b0321986b1 to disappear
Jul 17 21:00:20.531: INFO: Pod security-context-0e251b55-4120-486d-aa4b-c9b0321986b1 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.561 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":8,"skipped":46,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:20.764: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 78 lines ...
Jul 17 20:59:38.691: INFO: PersistentVolumeClaim csi-hostpath8ffkf found but phase is Pending instead of Bound.
Jul 17 20:59:40.795: INFO: PersistentVolumeClaim csi-hostpath8ffkf found but phase is Pending instead of Bound.
Jul 17 20:59:42.901: INFO: PersistentVolumeClaim csi-hostpath8ffkf found but phase is Pending instead of Bound.
Jul 17 20:59:45.007: INFO: PersistentVolumeClaim csi-hostpath8ffkf found and phase=Bound (16.94263222s)
STEP: Creating pod pod-subpath-test-dynamicpv-ghq4
STEP: Creating a pod to test subpath
Jul 17 20:59:45.321: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-ghq4" in namespace "provisioning-7587" to be "Succeeded or Failed"
Jul 17 20:59:45.425: INFO: Pod "pod-subpath-test-dynamicpv-ghq4": Phase="Pending", Reason="", readiness=false. Elapsed: 103.707113ms
Jul 17 20:59:47.529: INFO: Pod "pod-subpath-test-dynamicpv-ghq4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208258584s
Jul 17 20:59:49.634: INFO: Pod "pod-subpath-test-dynamicpv-ghq4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313017623s
Jul 17 20:59:51.738: INFO: Pod "pod-subpath-test-dynamicpv-ghq4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416992941s
Jul 17 20:59:53.842: INFO: Pod "pod-subpath-test-dynamicpv-ghq4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5208765s
Jul 17 20:59:55.949: INFO: Pod "pod-subpath-test-dynamicpv-ghq4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.628244613s
Jul 17 20:59:58.054: INFO: Pod "pod-subpath-test-dynamicpv-ghq4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.733150788s
STEP: Saw pod success
Jul 17 20:59:58.054: INFO: Pod "pod-subpath-test-dynamicpv-ghq4" satisfied condition "Succeeded or Failed"
Jul 17 20:59:58.158: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-ghq4 container test-container-volume-dynamicpv-ghq4: <nil>
STEP: delete the pod
Jul 17 20:59:58.388: INFO: Waiting for pod pod-subpath-test-dynamicpv-ghq4 to disappear
Jul 17 20:59:58.491: INFO: Pod pod-subpath-test-dynamicpv-ghq4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-ghq4
Jul 17 20:59:58.491: INFO: Deleting pod "pod-subpath-test-dynamicpv-ghq4" in namespace "provisioning-7587"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":79,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:21.002: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 102 lines ...
Jul 17 20:59:10.985: INFO: PersistentVolumeClaim pvc-m5gcj found and phase=Bound (102.500756ms)
STEP: Deleting the previously created pod
Jul 17 20:59:30.506: INFO: Deleting pod "pvc-volume-tester-77wdl" in namespace "csi-mock-volumes-942"
Jul 17 20:59:30.612: INFO: Wait up to 5m0s for pod "pvc-volume-tester-77wdl" to be fully deleted
STEP: Checking CSI driver logs
Jul 17 20:59:42.943: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6ImpiSE9CMWhKZFNxS09HTFRXbUJrNE1NcXI5cFl5OXhXc0V1ci1sM2d2UDAifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MjY1NTYxNjgsImlhdCI6MTYyNjU1NTU2OCwiaXNzIjoiaHR0cHM6Ly9hcGkuaW50ZXJuYWwuZTJlLTdkNDIwZTJiMjktMTY3ZDgudGVzdC1jbmNmLWF3cy5rOHMuaW8iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNzaS1tb2NrLXZvbHVtZXMtOTQyIiwicG9kIjp7Im5hbWUiOiJwdmMtdm9sdW1lLXRlc3Rlci03N3dkbCIsInVpZCI6IjQ5OWFiMDcxLTEzMzEtNGFmYi04NGVjLTc2ZTQxMDA1NmI3ZCJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjU5NmFjNjFiLTM1ZjEtNGI5MS1hNTExLTg2YmIzNWJhNTU4YiJ9fSwibmJmIjoxNjI2NTU1NTY4LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6Y3NpLW1vY2stdm9sdW1lcy05NDI6ZGVmYXVsdCJ9.QLA-bqsZZHz8fE30Gh2fMRLdcM0rkXuPtruiRFIXb6Ra0PWlLhFCyZJYFCJfvI1Whs-D5fBbzuEzODPUESmf0ToLPSfAB3X9WyXsS9s6KQu_9biHldcqvS1oiO7Vi85BHhdVd_ETdNx6Ef_XpzOUsBqU3oLkZOgA-ewVyL6_QcXsylf5w537qrv1y7fV_GsXKlOIC6YAlzaISqwDgdiMhQLOjbrg3RcISWCiFxA4xrUMmnrK5ENxQV2xKOfB2Q50gAvPHe9HyGjdZ3u8-R6QSpp0YvYVh3xyEat83SLSE2d9v3dKLohEYldQNQ7Q-d8qI_HbeznyPW-3hamuxpOOcA","expirationTimestamp":"2021-07-17T21:09:28Z"}}
Jul 17 20:59:42.943: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/499ab071-1331-4afb-84ec-76e410056b7d/volumes/kubernetes.io~csi/pvc-8d570cae-f875-46e4-9f28-9ef58aafaedb/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-77wdl
Jul 17 20:59:42.943: INFO: Deleting pod "pvc-volume-tester-77wdl" in namespace "csi-mock-volumes-942"
STEP: Deleting claim pvc-m5gcj
Jul 17 20:59:43.265: INFO: Waiting up to 2m0s for PersistentVolume pvc-8d570cae-f875-46e4-9f28-9ef58aafaedb to get deleted
Jul 17 20:59:43.368: INFO: PersistentVolume pvc-8d570cae-f875-46e4-9f28-9ef58aafaedb was removed
STEP: Deleting storageclass csi-mock-volumes-942-schmjqt
... skipping 52 lines ...
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:00:21.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jul 17 21:00:21.717: INFO: found topology map[topology.kubernetes.io/zone:eu-west-3a]
Jul 17 21:00:21.717: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul 17 21:00:21.717: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 5 lines ...
Jul 17 21:00:05.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 17 21:00:05.910: INFO: Waiting up to 5m0s for pod "pod-e811c618-8cb6-45cb-9903-689209367b8e" in namespace "emptydir-7474" to be "Succeeded or Failed"
Jul 17 21:00:06.014: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 104.144535ms
Jul 17 21:00:08.120: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210600315s
Jul 17 21:00:10.225: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31536559s
Jul 17 21:00:12.330: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420453944s
Jul 17 21:00:14.436: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.525910871s
Jul 17 21:00:16.541: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.631642183s
Jul 17 21:00:18.647: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.737049113s
Jul 17 21:00:20.753: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.842760685s
Jul 17 21:00:22.857: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.947337386s
STEP: Saw pod success
Jul 17 21:00:22.857: INFO: Pod "pod-e811c618-8cb6-45cb-9903-689209367b8e" satisfied condition "Succeeded or Failed"
Jul 17 21:00:22.962: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-e811c618-8cb6-45cb-9903-689209367b8e container test-container: <nil>
STEP: delete the pod
Jul 17 21:00:23.177: INFO: Waiting for pod pod-e811c618-8cb6-45cb-9903-689209367b8e to disappear
Jul 17 21:00:23.281: INFO: Pod pod-e811c618-8cb6-45cb-9903-689209367b8e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:18.213 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":86,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:23.524: INFO: Only supported for providers [gce gke] (not aws)
... skipping 97 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 179 lines ...
• [SLOW TEST:48.754 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":16,"skipped":123,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:26.688: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 85 lines ...
Jul 17 21:00:13.896: INFO: PersistentVolumeClaim pvc-pmj7v found but phase is Pending instead of Bound.
Jul 17 21:00:16.003: INFO: PersistentVolumeClaim pvc-pmj7v found and phase=Bound (14.839679535s)
Jul 17 21:00:16.004: INFO: Waiting up to 3m0s for PersistentVolume local-g67l7 to have phase Bound
Jul 17 21:00:16.107: INFO: PersistentVolume local-g67l7 found and phase=Bound (103.644201ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-w62w
STEP: Creating a pod to test subpath
Jul 17 21:00:16.424: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-w62w" in namespace "provisioning-6317" to be "Succeeded or Failed"
Jul 17 21:00:16.528: INFO: Pod "pod-subpath-test-preprovisionedpv-w62w": Phase="Pending", Reason="", readiness=false. Elapsed: 103.701055ms
Jul 17 21:00:18.634: INFO: Pod "pod-subpath-test-preprovisionedpv-w62w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209295108s
Jul 17 21:00:20.738: INFO: Pod "pod-subpath-test-preprovisionedpv-w62w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313611656s
Jul 17 21:00:22.842: INFO: Pod "pod-subpath-test-preprovisionedpv-w62w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417901862s
Jul 17 21:00:24.948: INFO: Pod "pod-subpath-test-preprovisionedpv-w62w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.523306208s
STEP: Saw pod success
Jul 17 21:00:24.948: INFO: Pod "pod-subpath-test-preprovisionedpv-w62w" satisfied condition "Succeeded or Failed"
Jul 17 21:00:25.052: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-w62w container test-container-subpath-preprovisionedpv-w62w: <nil>
STEP: delete the pod
Jul 17 21:00:25.267: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-w62w to disappear
Jul 17 21:00:25.371: INFO: Pod pod-subpath-test-preprovisionedpv-w62w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-w62w
Jul 17 21:00:25.371: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-w62w" in namespace "provisioning-6317"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":33,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:00:28.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-418" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":17,"skipped":131,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:28.330: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 21 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul 17 21:00:09.494: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a" in namespace "security-context-test-2728" to be "Succeeded or Failed"
Jul 17 21:00:09.597: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 102.314874ms
Jul 17 21:00:11.701: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207108154s
Jul 17 21:00:13.805: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310840132s
Jul 17 21:00:15.912: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417529653s
Jul 17 21:00:18.017: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.522588912s
Jul 17 21:00:20.121: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.626528581s
Jul 17 21:00:22.224: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.729554291s
Jul 17 21:00:24.328: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.833264186s
Jul 17 21:00:26.434: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.940078011s
Jul 17 21:00:28.538: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.043983581s
Jul 17 21:00:28.538: INFO: Pod "busybox-user-65534-fa775d31-47da-41de-9c46-232af01bbb8a" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:00:28.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2728" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":101,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:28.775: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:00:29.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9416" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":14,"skipped":120,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 54 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":75,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:00:26.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-839aeecd-23e6-45da-83b6-08a7622debb0
STEP: Creating a pod to test consume secrets
Jul 17 21:00:27.623: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-571ab7e9-31a7-44a0-bf19-4d711c5d7931" in namespace "projected-7303" to be "Succeeded or Failed"
Jul 17 21:00:27.727: INFO: Pod "pod-projected-secrets-571ab7e9-31a7-44a0-bf19-4d711c5d7931": Phase="Pending", Reason="", readiness=false. Elapsed: 104.415288ms
Jul 17 21:00:29.832: INFO: Pod "pod-projected-secrets-571ab7e9-31a7-44a0-bf19-4d711c5d7931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209494289s
Jul 17 21:00:31.937: INFO: Pod "pod-projected-secrets-571ab7e9-31a7-44a0-bf19-4d711c5d7931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.314528136s
STEP: Saw pod success
Jul 17 21:00:31.937: INFO: Pod "pod-projected-secrets-571ab7e9-31a7-44a0-bf19-4d711c5d7931" satisfied condition "Succeeded or Failed"
Jul 17 21:00:32.042: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-projected-secrets-571ab7e9-31a7-44a0-bf19-4d711c5d7931 container secret-volume-test: <nil>
STEP: delete the pod
Jul 17 21:00:32.257: INFO: Waiting for pod pod-projected-secrets-571ab7e9-31a7-44a0-bf19-4d711c5d7931 to disappear
Jul 17 21:00:32.361: INFO: Pod pod-projected-secrets-571ab7e9-31a7-44a0-bf19-4d711c5d7931 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.685 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":34,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:32.614: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 229 lines ...
Jul 17 20:59:58.151: INFO: PersistentVolumeClaim pvc-rxrc2 found but phase is Pending instead of Bound.
Jul 17 21:00:00.255: INFO: PersistentVolumeClaim pvc-rxrc2 found but phase is Pending instead of Bound.
Jul 17 21:00:02.359: INFO: PersistentVolumeClaim pvc-rxrc2 found but phase is Pending instead of Bound.
Jul 17 21:00:04.464: INFO: PersistentVolumeClaim pvc-rxrc2 found but phase is Pending instead of Bound.
Jul 17 21:00:06.569: INFO: PersistentVolumeClaim pvc-rxrc2 found but phase is Pending instead of Bound.
Jul 17 21:00:08.674: INFO: PersistentVolumeClaim pvc-rxrc2 found but phase is Pending instead of Bound.
Jul 17 21:00:10.675: FAIL: Failed waiting for PVC to be bound: PersistentVolumeClaims [pvc-rxrc2] not all in phase Bound within 5m0s
Unexpected error:
    <*errors.errorString | 0xc0030ba4b0>: {
        s: "PersistentVolumeClaims [pvc-rxrc2] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [pvc-rxrc2] not all in phase Bound within 5m0s
occurred

... skipping 61 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=nil [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493

    Jul 17 21:00:10.675: Failed waiting for PVC to be bound: PersistentVolumeClaims [pvc-rxrc2] not all in phase Bound within 5m0s
    Unexpected error:
        <*errors.errorString | 0xc0030ba4b0>: {
            s: "PersistentVolumeClaims [pvc-rxrc2] not all in phase Bound within 5m0s",
        }
        PersistentVolumeClaims [pvc-rxrc2] not all in phase Bound within 5m0s
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1794
------------------------------
S
------------------------------
{"msg":"FAILED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":2,"skipped":38,"failed":1,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:32.650: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 89 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-9267/configmap-test-cc90ca57-f87d-4416-a1e2-56671e8e2078
STEP: Creating a pod to test consume configMaps
Jul 17 21:00:33.499: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fd66038-becf-469f-bb55-86982fd5f49d" in namespace "configmap-9267" to be "Succeeded or Failed"
Jul 17 21:00:33.604: INFO: Pod "pod-configmaps-1fd66038-becf-469f-bb55-86982fd5f49d": Phase="Pending", Reason="", readiness=false. Elapsed: 104.674577ms
Jul 17 21:00:35.709: INFO: Pod "pod-configmaps-1fd66038-becf-469f-bb55-86982fd5f49d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20909564s
STEP: Saw pod success
Jul 17 21:00:35.709: INFO: Pod "pod-configmaps-1fd66038-becf-469f-bb55-86982fd5f49d" satisfied condition "Succeeded or Failed"
Jul 17 21:00:35.813: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-configmaps-1fd66038-becf-469f-bb55-86982fd5f49d container env-test: <nil>
STEP: delete the pod
Jul 17 21:00:36.029: INFO: Waiting for pod pod-configmaps-1fd66038-becf-469f-bb55-86982fd5f49d to disappear
Jul 17 21:00:36.134: INFO: Pod pod-configmaps-1fd66038-becf-469f-bb55-86982fd5f49d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:00:36.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9267" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":47,"failed":1,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:00:36.375: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27384 lines ...
• [SLOW TEST:8.194 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":23,"skipped":166,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:07:55.180: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 92 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support inline execution and attach
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:545
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":22,"skipped":159,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:07:55.596: INFO: Only supported for providers [vsphere] (not aws)
... skipping 37 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":14,"skipped":114,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:02:15.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 2 lines ...
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0717 21:02:56.717004   12374 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul 17 21:07:56.921: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jul 17 21:07:56.921: INFO: Deleting pod "simpletest.rc-2n56b" in namespace "gc-6553"
Jul 17 21:07:57.049: INFO: Deleting pod "simpletest.rc-56slz" in namespace "gc-6553"
Jul 17 21:07:57.162: INFO: Deleting pod "simpletest.rc-6d75q" in namespace "gc-6553"
Jul 17 21:07:57.278: INFO: Deleting pod "simpletest.rc-bcxsg" in namespace "gc-6553"
Jul 17 21:07:57.391: INFO: Deleting pod "simpletest.rc-bs8gh" in namespace "gc-6553"
Jul 17 21:07:57.499: INFO: Deleting pod "simpletest.rc-fg8ct" in namespace "gc-6553"
... skipping 10 lines ...
• [SLOW TEST:342.787 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":15,"skipped":114,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:07:58.258: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 62 lines ...
• [SLOW TEST:18.573 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":26,"skipped":130,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:00.424: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 85 lines ...
• [SLOW TEST:12.473 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":15,"skipped":93,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:01.880: INFO: Only supported for providers [vsphere] (not aws)
... skipping 68 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-43000214-d00b-4bac-9444-27d204fa7ca4
STEP: Creating a pod to test consume secrets
Jul 17 21:07:55.921: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17" in namespace "projected-3595" to be "Succeeded or Failed"
Jul 17 21:07:56.025: INFO: Pod "pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17": Phase="Pending", Reason="", readiness=false. Elapsed: 104.05919ms
Jul 17 21:07:58.131: INFO: Pod "pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21042766s
Jul 17 21:08:00.241: INFO: Pod "pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320030357s
Jul 17 21:08:02.345: INFO: Pod "pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.424329382s
STEP: Saw pod success
Jul 17 21:08:02.345: INFO: Pod "pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17" satisfied condition "Succeeded or Failed"
Jul 17 21:08:02.449: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul 17 21:08:02.663: INFO: Waiting for pod pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17 to disappear
Jul 17 21:08:02.767: INFO: Pod pod-projected-secrets-8f60b948-ddc8-4ee1-9af5-e23a21801f17 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.787 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":168,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:03.001: INFO: Driver aws doesn't publish storage capacity -- skipping
... skipping 69 lines ...
Jul 17 21:07:46.234: INFO: PersistentVolumeClaim pvc-prj4g found and phase=Bound (12.781205006s)
Jul 17 21:07:46.234: INFO: Waiting up to 3m0s for PersistentVolume nfs-62c94 to have phase Bound
Jul 17 21:07:46.338: INFO: PersistentVolume nfs-62c94 found and phase=Bound (104.301888ms)
STEP: Checking pod has write access to PersistentVolume
Jul 17 21:07:46.545: INFO: Creating nfs test pod
Jul 17 21:07:46.650: INFO: Pod should terminate with exitcode 0 (success)
Jul 17 21:07:46.650: INFO: Waiting up to 5m0s for pod "pvc-tester-f9kgd" in namespace "pv-8043" to be "Succeeded or Failed"
Jul 17 21:07:46.754: INFO: Pod "pvc-tester-f9kgd": Phase="Pending", Reason="", readiness=false. Elapsed: 103.653475ms
Jul 17 21:07:48.859: INFO: Pod "pvc-tester-f9kgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208550704s
Jul 17 21:07:50.963: INFO: Pod "pvc-tester-f9kgd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312874082s
Jul 17 21:07:53.068: INFO: Pod "pvc-tester-f9kgd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.418064243s
STEP: Saw pod success
Jul 17 21:07:53.068: INFO: Pod "pvc-tester-f9kgd" satisfied condition "Succeeded or Failed"
Jul 17 21:07:53.068: INFO: Pod pvc-tester-f9kgd succeeded 
Jul 17 21:07:53.068: INFO: Deleting pod "pvc-tester-f9kgd" in namespace "pv-8043"
Jul 17 21:07:53.176: INFO: Wait up to 5m0s for pod "pvc-tester-f9kgd" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul 17 21:07:53.279: INFO: Deleting PVC pvc-prj4g to trigger reclamation of PV 
Jul 17 21:07:53.279: INFO: Deleting PersistentVolumeClaim "pvc-prj4g"
... skipping 23 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":29,"skipped":233,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":33,"skipped":176,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:07:54.480: INFO: >>> kubeConfig: /root/.kube/config
... skipping 2 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Jul 17 21:07:54.997: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul 17 21:07:54.997: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fbfx
STEP: Creating a pod to test subpath
Jul 17 21:07:55.105: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fbfx" in namespace "provisioning-6596" to be "Succeeded or Failed"
Jul 17 21:07:55.208: INFO: Pod "pod-subpath-test-inlinevolume-fbfx": Phase="Pending", Reason="", readiness=false. Elapsed: 102.548878ms
Jul 17 21:07:57.315: INFO: Pod "pod-subpath-test-inlinevolume-fbfx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210324562s
Jul 17 21:07:59.422: INFO: Pod "pod-subpath-test-inlinevolume-fbfx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317248296s
Jul 17 21:08:01.525: INFO: Pod "pod-subpath-test-inlinevolume-fbfx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420056155s
Jul 17 21:08:03.629: INFO: Pod "pod-subpath-test-inlinevolume-fbfx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.52387196s
STEP: Saw pod success
Jul 17 21:08:03.629: INFO: Pod "pod-subpath-test-inlinevolume-fbfx" satisfied condition "Succeeded or Failed"
Jul 17 21:08:03.732: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-fbfx container test-container-volume-inlinevolume-fbfx: <nil>
STEP: delete the pod
Jul 17 21:08:03.952: INFO: Waiting for pod pod-subpath-test-inlinevolume-fbfx to disappear
Jul 17 21:08:04.054: INFO: Pod pod-subpath-test-inlinevolume-fbfx no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fbfx
Jul 17 21:08:04.055: INFO: Deleting pod "pod-subpath-test-inlinevolume-fbfx" in namespace "provisioning-6596"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":34,"skipped":176,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:04.499: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 135 lines ...
Jul 17 21:07:25.029: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-n42ng] to have phase Bound
Jul 17 21:07:25.134: INFO: PersistentVolumeClaim pvc-n42ng found and phase=Bound (104.342268ms)
STEP: Deleting the previously created pod
Jul 17 21:07:35.667: INFO: Deleting pod "pvc-volume-tester-g5fcf" in namespace "csi-mock-volumes-7378"
Jul 17 21:07:35.826: INFO: Wait up to 5m0s for pod "pvc-volume-tester-g5fcf" to be fully deleted
STEP: Checking CSI driver logs
Jul 17 21:07:42.141: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/a0a36ef7-eece-4a22-8762-57c377a85975/volumes/kubernetes.io~csi/pvc-1aaf6819-0a2b-4484-abea-2279be5eea04/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-g5fcf
Jul 17 21:07:42.141: INFO: Deleting pod "pvc-volume-tester-g5fcf" in namespace "csi-mock-volumes-7378"
STEP: Deleting claim pvc-n42ng
Jul 17 21:07:42.456: INFO: Waiting up to 2m0s for PersistentVolume pvc-1aaf6819-0a2b-4484-abea-2279be5eea04 to get deleted
Jul 17 21:07:42.560: INFO: PersistentVolume pvc-1aaf6819-0a2b-4484-abea-2279be5eea04 was removed
STEP: Deleting storageclass csi-mock-volumes-7378-sch7hlt
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:443
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:493
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":17,"skipped":117,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:04.628: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 19 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-3709/secret-test-60f6e122-f9e8-4f53-ab4e-f42e3e205b33
STEP: Creating a pod to test consume secrets
Jul 17 21:08:02.678: INFO: Waiting up to 5m0s for pod "pod-configmaps-10de8381-7459-49a2-ad72-553151abd6cf" in namespace "secrets-3709" to be "Succeeded or Failed"
Jul 17 21:08:02.781: INFO: Pod "pod-configmaps-10de8381-7459-49a2-ad72-553151abd6cf": Phase="Pending", Reason="", readiness=false. Elapsed: 103.14349ms
Jul 17 21:08:04.887: INFO: Pod "pod-configmaps-10de8381-7459-49a2-ad72-553151abd6cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208703862s
STEP: Saw pod success
Jul 17 21:08:04.887: INFO: Pod "pod-configmaps-10de8381-7459-49a2-ad72-553151abd6cf" satisfied condition "Succeeded or Failed"
Jul 17 21:08:04.991: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-configmaps-10de8381-7459-49a2-ad72-553151abd6cf container env-test: <nil>
STEP: delete the pod
Jul 17 21:08:05.205: INFO: Waiting for pod pod-configmaps-10de8381-7459-49a2-ad72-553151abd6cf to disappear
Jul 17 21:08:05.308: INFO: Pod pod-configmaps-10de8381-7459-49a2-ad72-553151abd6cf no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:08:05.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3709" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":106,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
Jul 17 21:08:06.475: INFO: AfterEach: Cleaning up test resources.
Jul 17 21:08:06.475: INFO: Deleting PersistentVolumeClaim "pvc-cpl5d"
Jul 17 21:08:06.582: INFO: Deleting PersistentVolume "hostpath-dchs7"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":18,"skipped":118,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:06.718: INFO: Only supported for providers [gce gke] (not aws)
... skipping 164 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:173
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume","total":-1,"completed":20,"skipped":110,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:09.629: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 163 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":20,"skipped":133,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:11.156: INFO: Driver "local" does not provide raw block - skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":30,"skipped":215,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:07:45.704: INFO: >>> kubeConfig: /root/.kube/config
... skipping 4 lines ...
Jul 17 21:07:46.226: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jul 17 21:07:46.923: INFO: Successfully created a new PD: "aws://eu-west-3a/vol-00e4b2ad0647dc4ec".
Jul 17 21:07:46.923: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-p9r6
STEP: Creating a pod to test exec-volume-test
Jul 17 21:07:47.031: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-p9r6" in namespace "volume-8986" to be "Succeeded or Failed"
Jul 17 21:07:47.135: INFO: Pod "exec-volume-test-inlinevolume-p9r6": Phase="Pending", Reason="", readiness=false. Elapsed: 103.639001ms
Jul 17 21:07:49.239: INFO: Pod "exec-volume-test-inlinevolume-p9r6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207479587s
Jul 17 21:07:51.344: INFO: Pod "exec-volume-test-inlinevolume-p9r6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313070897s
Jul 17 21:07:53.449: INFO: Pod "exec-volume-test-inlinevolume-p9r6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418278539s
Jul 17 21:07:55.553: INFO: Pod "exec-volume-test-inlinevolume-p9r6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52241984s
Jul 17 21:07:57.658: INFO: Pod "exec-volume-test-inlinevolume-p9r6": Phase="Running", Reason="", readiness=true. Elapsed: 10.626704s
Jul 17 21:07:59.762: INFO: Pod "exec-volume-test-inlinevolume-p9r6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.730658261s
STEP: Saw pod success
Jul 17 21:07:59.762: INFO: Pod "exec-volume-test-inlinevolume-p9r6" satisfied condition "Succeeded or Failed"
Jul 17 21:07:59.865: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod exec-volume-test-inlinevolume-p9r6 container exec-container-inlinevolume-p9r6: <nil>
STEP: delete the pod
Jul 17 21:08:00.080: INFO: Waiting for pod exec-volume-test-inlinevolume-p9r6 to disappear
Jul 17 21:08:00.187: INFO: Pod exec-volume-test-inlinevolume-p9r6 no longer exists
STEP: Deleting pod exec-volume-test-inlinevolume-p9r6
Jul 17 21:08:00.187: INFO: Deleting pod "exec-volume-test-inlinevolume-p9r6" in namespace "volume-8986"
Jul 17 21:08:00.483: INFO: Couldn't delete PD "aws://eu-west-3a/vol-00e4b2ad0647dc4ec", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-00e4b2ad0647dc4ec is currently attached to i-074f7d61809cdb109
	status code: 400, request id: d86db285-bae7-4444-b105-e9359c0ce90f
Jul 17 21:08:06.064: INFO: Couldn't delete PD "aws://eu-west-3a/vol-00e4b2ad0647dc4ec", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-00e4b2ad0647dc4ec is currently attached to i-074f7d61809cdb109
	status code: 400, request id: 6b24fd69-8232-44d7-a93d-13477da825fd
Jul 17 21:08:11.657: INFO: Successfully deleted PD "aws://eu-west-3a/vol-00e4b2ad0647dc4ec".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:08:11.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-8986" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":31,"skipped":215,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:11.885: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 276 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:561
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:590
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":20,"skipped":225,"failed":1,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil"]}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:17.035: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 121 lines ...
• [SLOW TEST:6.070 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":21,"skipped":137,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:17.261: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 46 lines ...
Jul 17 21:08:06.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul 17 21:08:07.306: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 17 21:08:07.556: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9062" in namespace "provisioning-9062" to be "Succeeded or Failed"
Jul 17 21:08:07.660: INFO: Pod "hostpath-symlink-prep-provisioning-9062": Phase="Pending", Reason="", readiness=false. Elapsed: 104.154612ms
Jul 17 21:08:09.767: INFO: Pod "hostpath-symlink-prep-provisioning-9062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.210888569s
STEP: Saw pod success
Jul 17 21:08:09.767: INFO: Pod "hostpath-symlink-prep-provisioning-9062" satisfied condition "Succeeded or Failed"
Jul 17 21:08:09.767: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9062" in namespace "provisioning-9062"
Jul 17 21:08:09.884: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9062" to be fully deleted
Jul 17 21:08:09.988: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-gddt
STEP: Creating a pod to test subpath
Jul 17 21:08:10.096: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-gddt" in namespace "provisioning-9062" to be "Succeeded or Failed"
Jul 17 21:08:10.200: INFO: Pod "pod-subpath-test-inlinevolume-gddt": Phase="Pending", Reason="", readiness=false. Elapsed: 104.769768ms
Jul 17 21:08:12.305: INFO: Pod "pod-subpath-test-inlinevolume-gddt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209518457s
Jul 17 21:08:14.411: INFO: Pod "pod-subpath-test-inlinevolume-gddt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.315101378s
STEP: Saw pod success
Jul 17 21:08:14.411: INFO: Pod "pod-subpath-test-inlinevolume-gddt" satisfied condition "Succeeded or Failed"
Jul 17 21:08:14.516: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-gddt container test-container-volume-inlinevolume-gddt: <nil>
STEP: delete the pod
Jul 17 21:08:14.734: INFO: Waiting for pod pod-subpath-test-inlinevolume-gddt to disappear
Jul 17 21:08:14.838: INFO: Pod pod-subpath-test-inlinevolume-gddt no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-gddt
Jul 17 21:08:14.838: INFO: Deleting pod "pod-subpath-test-inlinevolume-gddt" in namespace "provisioning-9062"
STEP: Deleting pod
Jul 17 21:08:14.942: INFO: Deleting pod "pod-subpath-test-inlinevolume-gddt" in namespace "provisioning-9062"
Jul 17 21:08:15.152: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9062" in namespace "provisioning-9062" to be "Succeeded or Failed"
Jul 17 21:08:15.257: INFO: Pod "hostpath-symlink-prep-provisioning-9062": Phase="Pending", Reason="", readiness=false. Elapsed: 104.609926ms
Jul 17 21:08:17.361: INFO: Pod "hostpath-symlink-prep-provisioning-9062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209561621s
STEP: Saw pod success
Jul 17 21:08:17.362: INFO: Pod "hostpath-symlink-prep-provisioning-9062" satisfied condition "Succeeded or Failed"
Jul 17 21:08:17.362: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9062" in namespace "provisioning-9062"
Jul 17 21:08:17.470: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9062" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:08:17.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9062" for this suite.
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:08:18.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-193" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":22,"skipped":144,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:18.251: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
Jul 17 21:08:12.860: INFO: PersistentVolumeClaim pvc-f9gbn found but phase is Pending instead of Bound.
Jul 17 21:08:14.966: INFO: PersistentVolumeClaim pvc-f9gbn found and phase=Bound (12.737873784s)
Jul 17 21:08:14.966: INFO: Waiting up to 3m0s for PersistentVolume local-cd659 to have phase Bound
Jul 17 21:08:15.078: INFO: PersistentVolume local-cd659 found and phase=Bound (111.935619ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-64pc
STEP: Creating a pod to test exec-volume-test
Jul 17 21:08:15.389: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-64pc" in namespace "volume-1398" to be "Succeeded or Failed"
Jul 17 21:08:15.497: INFO: Pod "exec-volume-test-preprovisionedpv-64pc": Phase="Pending", Reason="", readiness=false. Elapsed: 107.104382ms
Jul 17 21:08:17.612: INFO: Pod "exec-volume-test-preprovisionedpv-64pc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222225574s
STEP: Saw pod success
Jul 17 21:08:17.612: INFO: Pod "exec-volume-test-preprovisionedpv-64pc" satisfied condition "Succeeded or Failed"
Jul 17 21:08:17.715: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-64pc container exec-container-preprovisionedpv-64pc: <nil>
STEP: delete the pod
Jul 17 21:08:17.927: INFO: Waiting for pod exec-volume-test-preprovisionedpv-64pc to disappear
Jul 17 21:08:18.031: INFO: Pod exec-volume-test-preprovisionedpv-64pc no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-64pc
Jul 17 21:08:18.031: INFO: Deleting pod "exec-volume-test-preprovisionedpv-64pc" in namespace "volume-1398"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":24,"skipped":145,"failed":1,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:19.448: INFO: Only supported for providers [gce gke] (not aws)
... skipping 145 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":26,"skipped":141,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:20.242: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 52 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1027
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":17,"skipped":108,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:6.886 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should allow pods to hairpin back to themselves through services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:986
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":39,"skipped":226,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 134 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394

      Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":27,"skipped":217,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:00.928: INFO: >>> kubeConfig: /root/.kube/config
... skipping 16 lines ...
Jul 17 21:08:13.415: INFO: PersistentVolumeClaim pvc-tfwjw found but phase is Pending instead of Bound.
Jul 17 21:08:15.519: INFO: PersistentVolumeClaim pvc-tfwjw found and phase=Bound (10.63697398s)
Jul 17 21:08:15.519: INFO: Waiting up to 3m0s for PersistentVolume local-zsl6x to have phase Bound
Jul 17 21:08:15.623: INFO: PersistentVolume local-zsl6x found and phase=Bound (103.8074ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-7jg2
STEP: Creating a pod to test subpath
Jul 17 21:08:15.936: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7jg2" in namespace "provisioning-4143" to be "Succeeded or Failed"
Jul 17 21:08:16.039: INFO: Pod "pod-subpath-test-preprovisionedpv-7jg2": Phase="Pending", Reason="", readiness=false. Elapsed: 103.652991ms
Jul 17 21:08:18.144: INFO: Pod "pod-subpath-test-preprovisionedpv-7jg2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208342806s
Jul 17 21:08:20.248: INFO: Pod "pod-subpath-test-preprovisionedpv-7jg2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.312454936s
STEP: Saw pod success
Jul 17 21:08:20.248: INFO: Pod "pod-subpath-test-preprovisionedpv-7jg2" satisfied condition "Succeeded or Failed"
Jul 17 21:08:20.352: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-7jg2 container test-container-subpath-preprovisionedpv-7jg2: <nil>
STEP: delete the pod
Jul 17 21:08:20.576: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7jg2 to disappear
Jul 17 21:08:20.685: INFO: Pod pod-subpath-test-preprovisionedpv-7jg2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7jg2
Jul 17 21:08:20.685: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7jg2" in namespace "provisioning-4143"
STEP: Creating pod pod-subpath-test-preprovisionedpv-7jg2
STEP: Creating a pod to test subpath
Jul 17 21:08:20.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-7jg2" in namespace "provisioning-4143" to be "Succeeded or Failed"
Jul 17 21:08:21.014: INFO: Pod "pod-subpath-test-preprovisionedpv-7jg2": Phase="Pending", Reason="", readiness=false. Elapsed: 120.709918ms
Jul 17 21:08:23.119: INFO: Pod "pod-subpath-test-preprovisionedpv-7jg2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.225808846s
STEP: Saw pod success
Jul 17 21:08:23.119: INFO: Pod "pod-subpath-test-preprovisionedpv-7jg2" satisfied condition "Succeeded or Failed"
Jul 17 21:08:23.226: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-7jg2 container test-container-subpath-preprovisionedpv-7jg2: <nil>
STEP: delete the pod
Jul 17 21:08:23.451: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-7jg2 to disappear
Jul 17 21:08:23.555: INFO: Pod pod-subpath-test-preprovisionedpv-7jg2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-7jg2
Jul 17 21:08:23.555: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-7jg2" in namespace "provisioning-4143"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":13,"skipped":94,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:06:31.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":14,"skipped":94,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:26.837: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 59 lines ...
• [SLOW TEST:29.562 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":16,"skipped":128,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:26.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-b0ef4397-3b61-43a2-a90f-957eea8e7b5b
STEP: Creating a pod to test consume configMaps
Jul 17 21:08:27.594: INFO: Waiting up to 5m0s for pod "pod-configmaps-82c57824-3a1b-448f-9ea7-5db4d2d74946" in namespace "configmap-51" to be "Succeeded or Failed"
Jul 17 21:08:27.698: INFO: Pod "pod-configmaps-82c57824-3a1b-448f-9ea7-5db4d2d74946": Phase="Pending", Reason="", readiness=false. Elapsed: 103.126195ms
Jul 17 21:08:29.806: INFO: Pod "pod-configmaps-82c57824-3a1b-448f-9ea7-5db4d2d74946": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211942039s
Jul 17 21:08:31.911: INFO: Pod "pod-configmaps-82c57824-3a1b-448f-9ea7-5db4d2d74946": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.316374158s
STEP: Saw pod success
Jul 17 21:08:31.911: INFO: Pod "pod-configmaps-82c57824-3a1b-448f-9ea7-5db4d2d74946" satisfied condition "Succeeded or Failed"
Jul 17 21:08:32.015: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-configmaps-82c57824-3a1b-448f-9ea7-5db4d2d74946 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 21:08:32.229: INFO: Waiting for pod pod-configmaps-82c57824-3a1b-448f-9ea7-5db4d2d74946 to disappear
Jul 17 21:08:32.332: INFO: Pod pod-configmaps-82c57824-3a1b-448f-9ea7-5db4d2d74946 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.679 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":97,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:12.475 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":27,"skipped":143,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:32.742: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":23,"skipped":155,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:22.024: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
Jul 17 21:08:28.069: INFO: PersistentVolumeClaim pvc-xfrqw found but phase is Pending instead of Bound.
Jul 17 21:08:30.176: INFO: PersistentVolumeClaim pvc-xfrqw found and phase=Bound (4.312631782s)
Jul 17 21:08:30.176: INFO: Waiting up to 3m0s for PersistentVolume local-x5j2w to have phase Bound
Jul 17 21:08:30.278: INFO: PersistentVolume local-x5j2w found and phase=Bound (102.093016ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-4pmj
STEP: Creating a pod to test exec-volume-test
Jul 17 21:08:30.587: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-4pmj" in namespace "volume-5070" to be "Succeeded or Failed"
Jul 17 21:08:30.690: INFO: Pod "exec-volume-test-preprovisionedpv-4pmj": Phase="Pending", Reason="", readiness=false. Elapsed: 102.747539ms
Jul 17 21:08:32.794: INFO: Pod "exec-volume-test-preprovisionedpv-4pmj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206791755s
STEP: Saw pod success
Jul 17 21:08:32.794: INFO: Pod "exec-volume-test-preprovisionedpv-4pmj" satisfied condition "Succeeded or Failed"
Jul 17 21:08:32.896: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-4pmj container exec-container-preprovisionedpv-4pmj: <nil>
STEP: delete the pod
Jul 17 21:08:33.106: INFO: Waiting for pod exec-volume-test-preprovisionedpv-4pmj to disappear
Jul 17 21:08:33.208: INFO: Pod exec-volume-test-preprovisionedpv-4pmj no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-4pmj
Jul 17 21:08:33.208: INFO: Deleting pod "exec-volume-test-preprovisionedpv-4pmj" in namespace "volume-5070"
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":24,"skipped":155,"failed":0}
[BeforeEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jul 17 21:08:34.563: INFO: Only supported for node OS distro [windows] (not debian)
[AfterEach] [sig-windows] Hybrid cluster network
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":25,"skipped":151,"failed":1,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:34.979: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 165 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:982
    should create/apply a valid CR for CRD with validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1001
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema","total":-1,"completed":40,"skipped":243,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":18,"skipped":109,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:37.392: INFO: Only supported for providers [azure] (not aws)
... skipping 67 lines ...
Jul 17 21:08:28.664: INFO: PersistentVolumeClaim pvc-x9hpw found but phase is Pending instead of Bound.
Jul 17 21:08:30.770: INFO: PersistentVolumeClaim pvc-x9hpw found and phase=Bound (14.83918192s)
Jul 17 21:08:30.770: INFO: Waiting up to 3m0s for PersistentVolume local-lkx5k to have phase Bound
Jul 17 21:08:30.874: INFO: PersistentVolume local-lkx5k found and phase=Bound (103.856197ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nt7k
STEP: Creating a pod to test subpath
Jul 17 21:08:31.187: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nt7k" in namespace "provisioning-2816" to be "Succeeded or Failed"
Jul 17 21:08:31.291: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7k": Phase="Pending", Reason="", readiness=false. Elapsed: 104.08716ms
Jul 17 21:08:33.395: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208293586s
Jul 17 21:08:35.500: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313506455s
STEP: Saw pod success
Jul 17 21:08:35.500: INFO: Pod "pod-subpath-test-preprovisionedpv-nt7k" satisfied condition "Succeeded or Failed"
Jul 17 21:08:35.612: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-nt7k container test-container-subpath-preprovisionedpv-nt7k: <nil>
STEP: delete the pod
Jul 17 21:08:35.829: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nt7k to disappear
Jul 17 21:08:35.933: INFO: Pod pod-subpath-test-preprovisionedpv-nt7k no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nt7k
Jul 17 21:08:35.933: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nt7k" in namespace "provisioning-2816"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":32,"skipped":243,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:37.455: INFO: Only supported for providers [gce gke] (not aws)
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":21,"skipped":120,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:37.597: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":19,"skipped":116,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:37.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Jul 17 21:08:38.248: INFO: Waiting up to 5m0s for pod "client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590" in namespace "containers-3652" to be "Succeeded or Failed"
Jul 17 21:08:38.352: INFO: Pod "client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590": Phase="Pending", Reason="", readiness=false. Elapsed: 104.721684ms
Jul 17 21:08:40.458: INFO: Pod "client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210179678s
Jul 17 21:08:42.568: INFO: Pod "client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320135953s
Jul 17 21:08:44.674: INFO: Pod "client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590": Phase="Pending", Reason="", readiness=false. Elapsed: 6.426497567s
Jul 17 21:08:46.781: INFO: Pod "client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.533109835s
STEP: Saw pod success
Jul 17 21:08:46.781: INFO: Pod "client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590" satisfied condition "Succeeded or Failed"
Jul 17 21:08:46.886: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 21:08:47.103: INFO: Waiting for pod client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590 to disappear
Jul 17 21:08:47.207: INFO: Pod client-containers-a77cfab1-2b7c-4cc6-a7d8-60e3b860f590 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.810 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":124,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:47.440: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 45 lines ...
Jul 17 21:08:14.788: INFO: PersistentVolumeClaim pvc-mj4m6 found but phase is Pending instead of Bound.
Jul 17 21:08:16.893: INFO: PersistentVolumeClaim pvc-mj4m6 found and phase=Bound (16.945682139s)
Jul 17 21:08:16.893: INFO: Waiting up to 3m0s for PersistentVolume local-hkw8l to have phase Bound
Jul 17 21:08:16.998: INFO: PersistentVolume local-hkw8l found and phase=Bound (104.674479ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rdmc
STEP: Creating a pod to test atomic-volume-subpath
Jul 17 21:08:17.312: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rdmc" in namespace "provisioning-30" to be "Succeeded or Failed"
Jul 17 21:08:17.419: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Pending", Reason="", readiness=false. Elapsed: 106.568125ms
Jul 17 21:08:19.525: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212622666s
Jul 17 21:08:21.630: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Running", Reason="", readiness=true. Elapsed: 4.317467236s
Jul 17 21:08:23.735: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Running", Reason="", readiness=true. Elapsed: 6.422989719s
Jul 17 21:08:25.839: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Running", Reason="", readiness=true. Elapsed: 8.527352845s
Jul 17 21:08:27.944: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Running", Reason="", readiness=true. Elapsed: 10.631745175s
... skipping 4 lines ...
Jul 17 21:08:38.467: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Running", Reason="", readiness=true. Elapsed: 21.15537283s
Jul 17 21:08:40.572: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Running", Reason="", readiness=true. Elapsed: 23.26036131s
Jul 17 21:08:42.677: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Running", Reason="", readiness=true. Elapsed: 25.365129215s
Jul 17 21:08:44.782: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Running", Reason="", readiness=true. Elapsed: 27.469860322s
Jul 17 21:08:46.886: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.574260677s
STEP: Saw pod success
Jul 17 21:08:46.886: INFO: Pod "pod-subpath-test-preprovisionedpv-rdmc" satisfied condition "Succeeded or Failed"
Jul 17 21:08:46.991: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-rdmc container test-container-subpath-preprovisionedpv-rdmc: <nil>
STEP: delete the pod
Jul 17 21:08:47.205: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rdmc to disappear
Jul 17 21:08:47.310: INFO: Pod pod-subpath-test-preprovisionedpv-rdmc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rdmc
Jul 17 21:08:47.310: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rdmc" in namespace "provisioning-30"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":32,"skipped":183,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:48.765: INFO: Only supported for providers [openstack] (not aws)
... skipping 71 lines ...
Jul 17 21:08:00.334: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul 17 21:08:00.440: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathg6rrh] to have phase Bound
Jul 17 21:08:00.542: INFO: PersistentVolumeClaim csi-hostpathg6rrh found but phase is Pending instead of Bound.
Jul 17 21:08:02.645: INFO: PersistentVolumeClaim csi-hostpathg6rrh found and phase=Bound (2.204607677s)
STEP: Creating pod pod-subpath-test-dynamicpv-c7zk
STEP: Creating a pod to test subpath
Jul 17 21:08:02.956: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-c7zk" in namespace "provisioning-1963" to be "Succeeded or Failed"
Jul 17 21:08:03.058: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 102.399451ms
Jul 17 21:08:05.161: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205377346s
Jul 17 21:08:07.266: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309506635s
Jul 17 21:08:09.376: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41960972s
Jul 17 21:08:11.484: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527547306s
Jul 17 21:08:13.587: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.630446063s
Jul 17 21:08:15.689: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.733232514s
Jul 17 21:08:17.793: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.836732048s
Jul 17 21:08:19.899: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.943384304s
Jul 17 21:08:22.003: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.046465568s
Jul 17 21:08:24.111: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Pending", Reason="", readiness=false. Elapsed: 21.154527207s
Jul 17 21:08:26.213: INFO: Pod "pod-subpath-test-dynamicpv-c7zk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.256881356s
STEP: Saw pod success
Jul 17 21:08:26.213: INFO: Pod "pod-subpath-test-dynamicpv-c7zk" satisfied condition "Succeeded or Failed"
Jul 17 21:08:26.315: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-subpath-test-dynamicpv-c7zk container test-container-subpath-dynamicpv-c7zk: <nil>
STEP: delete the pod
Jul 17 21:08:26.537: INFO: Waiting for pod pod-subpath-test-dynamicpv-c7zk to disappear
Jul 17 21:08:26.639: INFO: Pod pod-subpath-test-dynamicpv-c7zk no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-c7zk
Jul 17 21:08:26.639: INFO: Deleting pod "pod-subpath-test-dynamicpv-c7zk" in namespace "provisioning-1963"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":23,"skipped":167,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:49.112: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
Jul 17 21:08:43.074: INFO: PersistentVolumeClaim pvc-dshxl found but phase is Pending instead of Bound.
Jul 17 21:08:45.178: INFO: PersistentVolumeClaim pvc-dshxl found and phase=Bound (8.52193s)
Jul 17 21:08:45.178: INFO: Waiting up to 3m0s for PersistentVolume local-28fp5 to have phase Bound
Jul 17 21:08:45.282: INFO: PersistentVolume local-28fp5 found and phase=Bound (103.714564ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-g5xj
STEP: Creating a pod to test subpath
Jul 17 21:08:45.594: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-g5xj" in namespace "provisioning-4248" to be "Succeeded or Failed"
Jul 17 21:08:45.699: INFO: Pod "pod-subpath-test-preprovisionedpv-g5xj": Phase="Pending", Reason="", readiness=false. Elapsed: 104.228519ms
Jul 17 21:08:47.804: INFO: Pod "pod-subpath-test-preprovisionedpv-g5xj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209308003s
Jul 17 21:08:49.910: INFO: Pod "pod-subpath-test-preprovisionedpv-g5xj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315573043s
Jul 17 21:08:52.015: INFO: Pod "pod-subpath-test-preprovisionedpv-g5xj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42071147s
STEP: Saw pod success
Jul 17 21:08:52.015: INFO: Pod "pod-subpath-test-preprovisionedpv-g5xj" satisfied condition "Succeeded or Failed"
Jul 17 21:08:52.120: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-g5xj container test-container-subpath-preprovisionedpv-g5xj: <nil>
STEP: delete the pod
Jul 17 21:08:52.333: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-g5xj to disappear
Jul 17 21:08:52.438: INFO: Pod pod-subpath-test-preprovisionedpv-g5xj no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-g5xj
Jul 17 21:08:52.439: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-g5xj" in namespace "provisioning-4248"
... skipping 33 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Jul 17 21:08:49.735: INFO: Waiting up to 5m0s for pod "metadata-volume-b6231f14-0be1-42ab-955a-e762d38880fb" in namespace "projected-4314" to be "Succeeded or Failed"
Jul 17 21:08:49.837: INFO: Pod "metadata-volume-b6231f14-0be1-42ab-955a-e762d38880fb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.155945ms
Jul 17 21:08:51.940: INFO: Pod "metadata-volume-b6231f14-0be1-42ab-955a-e762d38880fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204824431s
Jul 17 21:08:54.044: INFO: Pod "metadata-volume-b6231f14-0be1-42ab-955a-e762d38880fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.308602955s
STEP: Saw pod success
Jul 17 21:08:54.044: INFO: Pod "metadata-volume-b6231f14-0be1-42ab-955a-e762d38880fb" satisfied condition "Succeeded or Failed"
Jul 17 21:08:54.146: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod metadata-volume-b6231f14-0be1-42ab-955a-e762d38880fb container client-container: <nil>
STEP: delete the pod
Jul 17 21:08:54.358: INFO: Waiting for pod metadata-volume-b6231f14-0be1-42ab-955a-e762d38880fb to disappear
Jul 17 21:08:54.460: INFO: Pod metadata-volume-b6231f14-0be1-42ab-955a-e762d38880fb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.553 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":24,"skipped":168,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:54.690: INFO: Only supported for providers [openstack] (not aws)
... skipping 43 lines ...
Jul 17 21:08:44.373: INFO: PersistentVolumeClaim pvc-s5ljg found but phase is Pending instead of Bound.
Jul 17 21:08:46.477: INFO: PersistentVolumeClaim pvc-s5ljg found and phase=Bound (12.723809285s)
Jul 17 21:08:46.477: INFO: Waiting up to 3m0s for PersistentVolume local-gcdtp to have phase Bound
Jul 17 21:08:46.589: INFO: PersistentVolume local-gcdtp found and phase=Bound (112.215728ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4x5b
STEP: Creating a pod to test subpath
Jul 17 21:08:46.902: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4x5b" in namespace "provisioning-8136" to be "Succeeded or Failed"
Jul 17 21:08:47.005: INFO: Pod "pod-subpath-test-preprovisionedpv-4x5b": Phase="Pending", Reason="", readiness=false. Elapsed: 103.192306ms
Jul 17 21:08:49.108: INFO: Pod "pod-subpath-test-preprovisionedpv-4x5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206850929s
Jul 17 21:08:51.213: INFO: Pod "pod-subpath-test-preprovisionedpv-4x5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311090835s
Jul 17 21:08:53.317: INFO: Pod "pod-subpath-test-preprovisionedpv-4x5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.415213715s
STEP: Saw pod success
Jul 17 21:08:53.317: INFO: Pod "pod-subpath-test-preprovisionedpv-4x5b" satisfied condition "Succeeded or Failed"
Jul 17 21:08:53.420: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-4x5b container test-container-volume-preprovisionedpv-4x5b: <nil>
STEP: delete the pod
Jul 17 21:08:53.633: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4x5b to disappear
Jul 17 21:08:53.740: INFO: Pod pod-subpath-test-preprovisionedpv-4x5b no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4x5b
Jul 17 21:08:53.740: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4x5b" in namespace "provisioning-8136"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":17,"skipped":129,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
Jul 17 21:08:47.981: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 17 21:08:48.087: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9dr5
STEP: Creating a pod to test subpath
Jul 17 21:08:48.196: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9dr5" in namespace "provisioning-2879" to be "Succeeded or Failed"
Jul 17 21:08:48.301: INFO: Pod "pod-subpath-test-inlinevolume-9dr5": Phase="Pending", Reason="", readiness=false. Elapsed: 105.224384ms
Jul 17 21:08:50.406: INFO: Pod "pod-subpath-test-inlinevolume-9dr5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210226354s
Jul 17 21:08:52.512: INFO: Pod "pod-subpath-test-inlinevolume-9dr5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316301686s
Jul 17 21:08:54.618: INFO: Pod "pod-subpath-test-inlinevolume-9dr5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.42213643s
STEP: Saw pod success
Jul 17 21:08:54.618: INFO: Pod "pod-subpath-test-inlinevolume-9dr5" satisfied condition "Succeeded or Failed"
Jul 17 21:08:54.723: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-9dr5 container test-container-subpath-inlinevolume-9dr5: <nil>
STEP: delete the pod
Jul 17 21:08:54.940: INFO: Waiting for pod pod-subpath-test-inlinevolume-9dr5 to disappear
Jul 17 21:08:55.050: INFO: Pod pod-subpath-test-inlinevolume-9dr5 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9dr5
Jul 17 21:08:55.050: INFO: Deleting pod "pod-subpath-test-inlinevolume-9dr5" in namespace "provisioning-2879"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":23,"skipped":127,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:55.491: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
Jul 17 21:08:34.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
Jul 17 21:08:35.097: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 17 21:08:35.307: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9808" in namespace "provisioning-9808" to be "Succeeded or Failed"
Jul 17 21:08:35.410: INFO: Pod "hostpath-symlink-prep-provisioning-9808": Phase="Pending", Reason="", readiness=false. Elapsed: 102.770431ms
Jul 17 21:08:37.520: INFO: Pod "hostpath-symlink-prep-provisioning-9808": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21287636s
Jul 17 21:08:39.624: INFO: Pod "hostpath-symlink-prep-provisioning-9808": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.316737826s
STEP: Saw pod success
Jul 17 21:08:39.624: INFO: Pod "hostpath-symlink-prep-provisioning-9808" satisfied condition "Succeeded or Failed"
Jul 17 21:08:39.624: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9808" in namespace "provisioning-9808"
Jul 17 21:08:39.732: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9808" to be fully deleted
Jul 17 21:08:39.845: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zqth
Jul 17 21:08:44.162: INFO: Running '/tmp/kubectl842992387/kubectl --server=https://api.e2e-7d420e2b29-167d8.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=provisioning-9808 exec pod-subpath-test-inlinevolume-zqth --container test-container-volume-inlinevolume-zqth -- /bin/sh -c rm -r /test-volume/provisioning-9808'
Jul 17 21:08:45.291: INFO: stderr: ""
Jul 17 21:08:45.291: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-zqth
Jul 17 21:08:45.292: INFO: Deleting pod "pod-subpath-test-inlinevolume-zqth" in namespace "provisioning-9808"
Jul 17 21:08:45.395: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-zqth" to be fully deleted
STEP: Deleting pod
Jul 17 21:08:51.601: INFO: Deleting pod "pod-subpath-test-inlinevolume-zqth" in namespace "provisioning-9808"
Jul 17 21:08:51.808: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-9808" in namespace "provisioning-9808" to be "Succeeded or Failed"
Jul 17 21:08:51.911: INFO: Pod "hostpath-symlink-prep-provisioning-9808": Phase="Pending", Reason="", readiness=false. Elapsed: 102.5306ms
Jul 17 21:08:54.014: INFO: Pod "hostpath-symlink-prep-provisioning-9808": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205560808s
Jul 17 21:08:56.117: INFO: Pod "hostpath-symlink-prep-provisioning-9808": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309003374s
STEP: Saw pod success
Jul 17 21:08:56.117: INFO: Pod "hostpath-symlink-prep-provisioning-9808" satisfied condition "Succeeded or Failed"
Jul 17 21:08:56.117: INFO: Deleting pod "hostpath-symlink-prep-provisioning-9808" in namespace "provisioning-9808"
Jul 17 21:08:56.224: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-9808" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:08:56.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9808" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":28,"skipped":217,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:25.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":29,"skipped":217,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:58.002: INFO: Only supported for providers [gce gke] (not aws)
... skipping 205 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:08:58.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-5976" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":24,"skipped":132,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:08:58.378: INFO: Only supported for providers [openstack] (not aws)
... skipping 92 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 17 21:08:55.821: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484" in namespace "projected-319" to be "Succeeded or Failed"
Jul 17 21:08:55.924: INFO: Pod "downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484": Phase="Pending", Reason="", readiness=false. Elapsed: 102.993204ms
Jul 17 21:08:58.028: INFO: Pod "downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206571345s
Jul 17 21:09:00.134: INFO: Pod "downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312032149s
Jul 17 21:09:02.239: INFO: Pod "downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.417500671s
STEP: Saw pod success
Jul 17 21:09:02.239: INFO: Pod "downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484" satisfied condition "Succeeded or Failed"
Jul 17 21:09:02.342: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484 container client-container: <nil>
STEP: delete the pod
Jul 17 21:09:02.560: INFO: Waiting for pod downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484 to disappear
Jul 17 21:09:02.679: INFO: Pod downwardapi-volume-e5139000-1c93-4e42-a6cf-bc5dadca5484 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.686 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":131,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:04.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9145" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":19,"skipped":133,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:05.108: INFO: Driver hostPathSymlink doesn't support ext3 -- skipping
... skipping 102 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration","total":-1,"completed":21,"skipped":236,"failed":1,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil"]}
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:20.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":22,"skipped":236,"failed":1,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:05.379: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 44 lines ...
• [SLOW TEST:8.310 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":252,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:06.508: INFO: Driver local doesn't support ext4 -- skipping
... skipping 69 lines ...
Jul 17 21:09:05.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 17 21:09:05.817: INFO: Waiting up to 5m0s for pod "pod-6034fa0d-4b97-4636-81e3-9793ea84db59" in namespace "emptydir-8528" to be "Succeeded or Failed"
Jul 17 21:09:05.920: INFO: Pod "pod-6034fa0d-4b97-4636-81e3-9793ea84db59": Phase="Pending", Reason="", readiness=false. Elapsed: 102.725385ms
Jul 17 21:09:08.023: INFO: Pod "pod-6034fa0d-4b97-4636-81e3-9793ea84db59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.205746533s
STEP: Saw pod success
Jul 17 21:09:08.023: INFO: Pod "pod-6034fa0d-4b97-4636-81e3-9793ea84db59" satisfied condition "Succeeded or Failed"
Jul 17 21:09:08.126: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-6034fa0d-4b97-4636-81e3-9793ea84db59 container test-container: <nil>
STEP: delete the pod
Jul 17 21:09:08.338: INFO: Waiting for pod pod-6034fa0d-4b97-4636-81e3-9793ea84db59 to disappear
Jul 17 21:09:08.441: INFO: Pod pod-6034fa0d-4b97-4636-81e3-9793ea84db59 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:08.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8528" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":146,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:08.685: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 127 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:09.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":21,"skipped":163,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Jul 17 21:08:59.455: INFO: PersistentVolumeClaim pvc-vgs2l found but phase is Pending instead of Bound.
Jul 17 21:09:01.560: INFO: PersistentVolumeClaim pvc-vgs2l found and phase=Bound (2.211518934s)
Jul 17 21:09:01.560: INFO: Waiting up to 3m0s for PersistentVolume local-b9rs7 to have phase Bound
Jul 17 21:09:01.663: INFO: PersistentVolume local-b9rs7 found and phase=Bound (102.350176ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jwr7
STEP: Creating a pod to test subpath
Jul 17 21:09:01.976: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jwr7" in namespace "provisioning-6417" to be "Succeeded or Failed"
Jul 17 21:09:02.079: INFO: Pod "pod-subpath-test-preprovisionedpv-jwr7": Phase="Pending", Reason="", readiness=false. Elapsed: 103.429349ms
Jul 17 21:09:04.183: INFO: Pod "pod-subpath-test-preprovisionedpv-jwr7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207074983s
Jul 17 21:09:06.286: INFO: Pod "pod-subpath-test-preprovisionedpv-jwr7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.309915495s
STEP: Saw pod success
Jul 17 21:09:06.286: INFO: Pod "pod-subpath-test-preprovisionedpv-jwr7" satisfied condition "Succeeded or Failed"
Jul 17 21:09:06.388: INFO: Trying to get logs from node ip-172-20-55-234.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-jwr7 container test-container-volume-preprovisionedpv-jwr7: <nil>
STEP: delete the pod
Jul 17 21:09:06.860: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jwr7 to disappear
Jul 17 21:09:06.962: INFO: Pod pod-subpath-test-preprovisionedpv-jwr7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jwr7
Jul 17 21:09:06.962: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jwr7" in namespace "provisioning-6417"
... skipping 26 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":25,"skipped":173,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] health handlers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:10.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-6693" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":22,"skipped":167,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 18 lines ...
Jul 17 21:08:59.055: INFO: PersistentVolumeClaim pvc-r8jkc found but phase is Pending instead of Bound.
Jul 17 21:09:01.160: INFO: PersistentVolumeClaim pvc-r8jkc found and phase=Bound (8.527054238s)
Jul 17 21:09:01.160: INFO: Waiting up to 3m0s for PersistentVolume local-8nhwl to have phase Bound
Jul 17 21:09:01.264: INFO: PersistentVolume local-8nhwl found and phase=Bound (103.915792ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vtmh
STEP: Creating a pod to test subpath
Jul 17 21:09:01.577: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vtmh" in namespace "provisioning-3721" to be "Succeeded or Failed"
Jul 17 21:09:01.683: INFO: Pod "pod-subpath-test-preprovisionedpv-vtmh": Phase="Pending", Reason="", readiness=false. Elapsed: 106.008577ms
Jul 17 21:09:03.789: INFO: Pod "pod-subpath-test-preprovisionedpv-vtmh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211301985s
Jul 17 21:09:05.893: INFO: Pod "pod-subpath-test-preprovisionedpv-vtmh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315518488s
Jul 17 21:09:07.999: INFO: Pod "pod-subpath-test-preprovisionedpv-vtmh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42114442s
Jul 17 21:09:10.109: INFO: Pod "pod-subpath-test-preprovisionedpv-vtmh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.532058236s
STEP: Saw pod success
Jul 17 21:09:10.110: INFO: Pod "pod-subpath-test-preprovisionedpv-vtmh" satisfied condition "Succeeded or Failed"
Jul 17 21:09:10.213: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-vtmh container test-container-subpath-preprovisionedpv-vtmh: <nil>
STEP: delete the pod
Jul 17 21:09:10.630: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vtmh to disappear
Jul 17 21:09:10.735: INFO: Pod pod-subpath-test-preprovisionedpv-vtmh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vtmh
Jul 17 21:09:10.735: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vtmh" in namespace "provisioning-3721"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":33,"skipped":185,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:12.264: INFO: Only supported for providers [vsphere] (not aws)
... skipping 38 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:12.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4632" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":23,"skipped":168,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:12.564: INFO: Only supported for providers [gce gke] (not aws)
... skipping 62 lines ...
Jul 17 21:08:41.075: INFO: PersistentVolumeClaim pvc-m4qlv found and phase=Bound (102.822203ms)
Jul 17 21:08:41.075: INFO: Waiting up to 3m0s for PersistentVolume nfs-qd6g9 to have phase Bound
Jul 17 21:08:41.178: INFO: PersistentVolume nfs-qd6g9 found and phase=Bound (102.703432ms)
[It] should test that a PV becomes Available and is clean after the PVC is deleted.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
STEP: Writing to the volume.
Jul 17 21:08:41.487: INFO: Waiting up to 5m0s for pod "pvc-tester-9k9nt" in namespace "pv-9800" to be "Succeeded or Failed"
Jul 17 21:08:41.590: INFO: Pod "pvc-tester-9k9nt": Phase="Pending", Reason="", readiness=false. Elapsed: 102.654228ms
Jul 17 21:08:43.694: INFO: Pod "pvc-tester-9k9nt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20699037s
Jul 17 21:08:45.798: INFO: Pod "pvc-tester-9k9nt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310514074s
Jul 17 21:08:47.901: INFO: Pod "pvc-tester-9k9nt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.413795423s
STEP: Saw pod success
Jul 17 21:08:47.901: INFO: Pod "pvc-tester-9k9nt" satisfied condition "Succeeded or Failed"
STEP: Deleting the claim
Jul 17 21:08:47.901: INFO: Deleting pod "pvc-tester-9k9nt" in namespace "pv-9800"
Jul 17 21:08:48.007: INFO: Wait up to 5m0s for pod "pvc-tester-9k9nt" to be fully deleted
Jul 17 21:08:48.110: INFO: Deleting PVC pvc-m4qlv to trigger reclamation of PV 
Jul 17 21:08:48.110: INFO: Deleting PersistentVolumeClaim "pvc-m4qlv"
Jul 17 21:08:48.214: INFO: Waiting for reclaim process to complete.
... skipping 4 lines ...
Jul 17 21:08:54.627: INFO: PersistentVolume nfs-qd6g9 found and phase=Available (6.413381281s)
Jul 17 21:08:54.730: INFO: PV nfs-qd6g9 now in "Available" phase
STEP: Re-mounting the volume.
Jul 17 21:08:54.834: INFO: Waiting up to timeout=1m0s for PersistentVolumeClaims [pvc-t6k8t] to have phase Bound
Jul 17 21:08:54.938: INFO: PersistentVolumeClaim pvc-t6k8t found and phase=Bound (103.774381ms)
STEP: Verifying the mount has been cleaned.
Jul 17 21:08:55.041: INFO: Waiting up to 5m0s for pod "pvc-tester-lpdst" in namespace "pv-9800" to be "Succeeded or Failed"
Jul 17 21:08:55.144: INFO: Pod "pvc-tester-lpdst": Phase="Pending", Reason="", readiness=false. Elapsed: 102.87019ms
Jul 17 21:08:57.249: INFO: Pod "pvc-tester-lpdst": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207380786s
Jul 17 21:08:59.353: INFO: Pod "pvc-tester-lpdst": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31141627s
Jul 17 21:09:01.456: INFO: Pod "pvc-tester-lpdst": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.414970867s
STEP: Saw pod success
Jul 17 21:09:01.456: INFO: Pod "pvc-tester-lpdst" satisfied condition "Succeeded or Failed"
Jul 17 21:09:01.456: INFO: Deleting pod "pvc-tester-lpdst" in namespace "pv-9800"
Jul 17 21:09:01.566: INFO: Wait up to 5m0s for pod "pvc-tester-lpdst" to be fully deleted
Jul 17 21:09:01.669: INFO: Pod exited without failure; the volume has been recycled.
Jul 17 21:09:01.669: INFO: Removing second PVC, waiting for the recycler to finish before cleanup.
Jul 17 21:09:01.669: INFO: Deleting PVC pvc-t6k8t to trigger reclamation of PV 
Jul 17 21:09:01.669: INFO: Deleting PersistentVolumeClaim "pvc-t6k8t"
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    when invoking the Recycle reclaim policy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:265
      should test that a PV becomes Available and is clean after the PVC is deleted.
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:283
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","total":-1,"completed":41,"skipped":248,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":25,"skipped":158,"failed":0}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:56.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
STEP: Creating a job
STEP: Ensuring job exceeds backoffLimit
STEP: Checking that 2 pods were created and their status is Failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:15.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9064" for this suite.


• [SLOW TEST:19.032 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:349
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":26,"skipped":158,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 17 21:09:16.211: INFO: Waiting up to 5m0s for pod "pod-ed0db5a4-cf3a-4896-a5ca-188799275d46" in namespace "emptydir-3130" to be "Succeeded or Failed"
Jul 17 21:09:16.313: INFO: Pod "pod-ed0db5a4-cf3a-4896-a5ca-188799275d46": Phase="Pending", Reason="", readiness=false. Elapsed: 102.423209ms
Jul 17 21:09:18.417: INFO: Pod "pod-ed0db5a4-cf3a-4896-a5ca-188799275d46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206723748s
STEP: Saw pod success
Jul 17 21:09:18.418: INFO: Pod "pod-ed0db5a4-cf3a-4896-a5ca-188799275d46" satisfied condition "Succeeded or Failed"
Jul 17 21:09:18.520: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-ed0db5a4-cf3a-4896-a5ca-188799275d46 container test-container: <nil>
STEP: delete the pod
Jul 17 21:09:18.733: INFO: Waiting for pod pod-ed0db5a4-cf3a-4896-a5ca-188799275d46 to disappear
Jul 17 21:09:18.836: INFO: Pod pod-ed0db5a4-cf3a-4896-a5ca-188799275d46 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:18.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3130" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":27,"skipped":159,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:19.076: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 126 lines ...
• [SLOW TEST:12.308 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should support building a client with a CSR
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/certificates.go:55
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":31,"skipped":258,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:19.615: INFO: Only supported for providers [vsphere] (not aws)
... skipping 95 lines ...
STEP: creating a claim
Jul 17 21:08:09.338: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath4ktqj] to have phase Bound
Jul 17 21:08:09.444: INFO: PersistentVolumeClaim csi-hostpath4ktqj found but phase is Pending instead of Bound.
Jul 17 21:08:11.551: INFO: PersistentVolumeClaim csi-hostpath4ktqj found and phase=Bound (2.212700809s)
STEP: Expanding non-expandable pvc
Jul 17 21:08:11.759: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul 17 21:08:11.969: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:14.179: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:16.178: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:18.178: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:20.178: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:22.178: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:24.180: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:26.178: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:28.178: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:30.181: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:32.179: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:34.179: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:36.179: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:38.178: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:40.192: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:42.181: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul 17 21:08:42.392: INFO: Error updating pvc csi-hostpath4ktqj: persistentvolumeclaims "csi-hostpath4ktqj" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul 17 21:08:42.392: INFO: Deleting PersistentVolumeClaim "csi-hostpath4ktqj"
Jul 17 21:08:42.508: INFO: Waiting up to 5m0s for PersistentVolume pvc-951e6586-d74c-48b5-bc60-2d798824e537 to get deleted
Jul 17 21:08:42.615: INFO: PersistentVolume pvc-951e6586-d74c-48b5-bc60-2d798824e537 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-9294
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":30,"skipped":234,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:20.680: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 17 lines ...
• [SLOW TEST:22.784 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":25,"skipped":146,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":28,"skipped":144,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:54.599: INFO: >>> kubeConfig: /root/.kube/config
... skipping 21 lines ...
Jul 17 21:09:12.923: INFO: PersistentVolumeClaim pvc-58296 found but phase is Pending instead of Bound.
Jul 17 21:09:15.027: INFO: PersistentVolumeClaim pvc-58296 found and phase=Bound (10.648502246s)
Jul 17 21:09:15.027: INFO: Waiting up to 3m0s for PersistentVolume local-gcsm6 to have phase Bound
Jul 17 21:09:15.131: INFO: PersistentVolume local-gcsm6 found and phase=Bound (103.632799ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c44w
STEP: Creating a pod to test subpath
Jul 17 21:09:15.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c44w" in namespace "provisioning-5008" to be "Succeeded or Failed"
Jul 17 21:09:15.547: INFO: Pod "pod-subpath-test-preprovisionedpv-c44w": Phase="Pending", Reason="", readiness=false. Elapsed: 103.494357ms
Jul 17 21:09:17.654: INFO: Pod "pod-subpath-test-preprovisionedpv-c44w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20964953s
Jul 17 21:09:19.758: INFO: Pod "pod-subpath-test-preprovisionedpv-c44w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314555977s
Jul 17 21:09:21.863: INFO: Pod "pod-subpath-test-preprovisionedpv-c44w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.418667042s
STEP: Saw pod success
Jul 17 21:09:21.863: INFO: Pod "pod-subpath-test-preprovisionedpv-c44w" satisfied condition "Succeeded or Failed"
Jul 17 21:09:21.966: INFO: Trying to get logs from node ip-172-20-56-168.eu-west-3.compute.internal pod pod-subpath-test-preprovisionedpv-c44w container test-container-volume-preprovisionedpv-c44w: <nil>
STEP: delete the pod
Jul 17 21:09:22.180: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c44w to disappear
Jul 17 21:09:22.284: INFO: Pod pod-subpath-test-preprovisionedpv-c44w no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c44w
Jul 17 21:09:22.284: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c44w" in namespace "provisioning-5008"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":29,"skipped":144,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:25.970: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 55 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":175,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:26.688: INFO: Only supported for providers [vsphere] (not aws)
... skipping 76 lines ...
• [SLOW TEST:5.899 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":26,"skipped":147,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:27.159: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 46 lines ...
• [SLOW TEST:13.303 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:53
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":42,"skipped":250,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:28.181: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:29.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4226" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":43,"skipped":259,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 41 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":20,"skipped":121,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:30.562: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":19,"skipped":127,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:08:17.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:72.877 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":20,"skipped":127,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:30.682: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 74 lines ...
Jul 17 21:09:21.197: INFO: Pod aws-client still exists
Jul 17 21:09:23.092: INFO: Waiting for pod aws-client to disappear
Jul 17 21:09:23.195: INFO: Pod aws-client still exists
Jul 17 21:09:25.093: INFO: Waiting for pod aws-client to disappear
Jul 17 21:09:25.196: INFO: Pod aws-client no longer exists
STEP: cleaning the environment after aws
Jul 17 21:09:25.774: INFO: Couldn't delete PD "aws://eu-west-3a/vol-05425577c92c7c4ec", sleeping 5s: error deleting EBS volumes: VolumeInUse: Volume vol-05425577c92c7c4ec is currently attached to i-02ba71dc56c4adb77
	status code: 400, request id: b3de838e-49a4-42dd-bb8c-463895b6ec4a
Jul 17 21:09:31.324: INFO: Successfully deleted PD "aws://eu-west-3a/vol-05425577c92c7c4ec".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:31.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-4120" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":27,"skipped":135,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:09:31.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:33.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-7071" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":28,"skipped":135,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:33.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":29,"skipped":137,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:33.756: INFO: Only supported for providers [azure] (not aws)
... skipping 69 lines ...
• [SLOW TEST:7.364 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":25,"skipped":183,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:34.107: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 183 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:211
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":30,"skipped":145,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":25,"skipped":173,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:36.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6822" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":26,"skipped":177,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:37.177: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 156 lines ...
Jul 17 21:09:05.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul 17 21:09:05.913: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul 17 21:09:06.125: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8302" in namespace "provisioning-8302" to be "Succeeded or Failed"
Jul 17 21:09:06.229: INFO: Pod "hostpath-symlink-prep-provisioning-8302": Phase="Pending", Reason="", readiness=false. Elapsed: 104.287589ms
Jul 17 21:09:08.334: INFO: Pod "hostpath-symlink-prep-provisioning-8302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208823514s
STEP: Saw pod success
Jul 17 21:09:08.334: INFO: Pod "hostpath-symlink-prep-provisioning-8302" satisfied condition "Succeeded or Failed"
Jul 17 21:09:08.334: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8302" in namespace "provisioning-8302"
Jul 17 21:09:08.441: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8302" to be fully deleted
Jul 17 21:09:08.545: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-sndk
STEP: Creating a pod to test atomic-volume-subpath
Jul 17 21:09:08.652: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-sndk" in namespace "provisioning-8302" to be "Succeeded or Failed"
Jul 17 21:09:08.756: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Pending", Reason="", readiness=false. Elapsed: 103.952171ms
Jul 17 21:09:10.868: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 2.216493757s
Jul 17 21:09:12.973: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 4.321245746s
Jul 17 21:09:15.079: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 6.426743529s
Jul 17 21:09:17.188: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 8.535965685s
Jul 17 21:09:19.293: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 10.641535572s
Jul 17 21:09:21.398: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 12.745808462s
Jul 17 21:09:23.506: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 14.854598485s
Jul 17 21:09:25.611: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 16.959052018s
Jul 17 21:09:27.716: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 19.063951117s
Jul 17 21:09:29.821: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Running", Reason="", readiness=true. Elapsed: 21.169212951s
Jul 17 21:09:31.926: INFO: Pod "pod-subpath-test-inlinevolume-sndk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.274023413s
STEP: Saw pod success
Jul 17 21:09:31.926: INFO: Pod "pod-subpath-test-inlinevolume-sndk" satisfied condition "Succeeded or Failed"
Jul 17 21:09:32.031: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod pod-subpath-test-inlinevolume-sndk container test-container-subpath-inlinevolume-sndk: <nil>
STEP: delete the pod
Jul 17 21:09:32.278: INFO: Waiting for pod pod-subpath-test-inlinevolume-sndk to disappear
Jul 17 21:09:32.383: INFO: Pod pod-subpath-test-inlinevolume-sndk no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-sndk
Jul 17 21:09:32.383: INFO: Deleting pod "pod-subpath-test-inlinevolume-sndk" in namespace "provisioning-8302"
STEP: Deleting pod
Jul 17 21:09:32.488: INFO: Deleting pod "pod-subpath-test-inlinevolume-sndk" in namespace "provisioning-8302"
Jul 17 21:09:32.722: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8302" in namespace "provisioning-8302" to be "Succeeded or Failed"
Jul 17 21:09:32.826: INFO: Pod "hostpath-symlink-prep-provisioning-8302": Phase="Pending", Reason="", readiness=false. Elapsed: 103.766628ms
Jul 17 21:09:34.931: INFO: Pod "hostpath-symlink-prep-provisioning-8302": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208986839s
Jul 17 21:09:37.036: INFO: Pod "hostpath-symlink-prep-provisioning-8302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313332687s
STEP: Saw pod success
Jul 17 21:09:37.036: INFO: Pod "hostpath-symlink-prep-provisioning-8302" satisfied condition "Succeeded or Failed"
Jul 17 21:09:37.036: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8302" in namespace "provisioning-8302"
Jul 17 21:09:37.143: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8302" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:37.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8302" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":23,"skipped":238,"failed":1,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil"]}

S
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 160 lines ...
• [SLOW TEST:192.413 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not disrupt a cloud load-balancer's connectivity during rollout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:158
------------------------------
{"msg":"PASSED [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","total":-1,"completed":26,"skipped":147,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:37.565: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 158 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":23,"skipped":122,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:37.974: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
• [SLOW TEST:60.951 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":248,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:38.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3552" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":27,"skipped":153,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:38.566: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 136 lines ...
Jul 17 21:09:28.494: INFO: PersistentVolumeClaim pvc-j8xkk found but phase is Pending instead of Bound.
Jul 17 21:09:30.599: INFO: PersistentVolumeClaim pvc-j8xkk found and phase=Bound (4.313239397s)
Jul 17 21:09:30.599: INFO: Waiting up to 3m0s for PersistentVolume local-4nmw9 to have phase Bound
Jul 17 21:09:30.703: INFO: PersistentVolume local-4nmw9 found and phase=Bound (103.856363ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-c789
STEP: Creating a pod to test exec-volume-test
Jul 17 21:09:31.016: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-c789" in namespace "volume-6358" to be "Succeeded or Failed"
Jul 17 21:09:31.120: INFO: Pod "exec-volume-test-preprovisionedpv-c789": Phase="Pending", Reason="", readiness=false. Elapsed: 104.726857ms
Jul 17 21:09:33.225: INFO: Pod "exec-volume-test-preprovisionedpv-c789": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209137837s
Jul 17 21:09:35.330: INFO: Pod "exec-volume-test-preprovisionedpv-c789": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313980219s
STEP: Saw pod success
Jul 17 21:09:35.330: INFO: Pod "exec-volume-test-preprovisionedpv-c789" satisfied condition "Succeeded or Failed"
Jul 17 21:09:35.434: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod exec-volume-test-preprovisionedpv-c789 container exec-container-preprovisionedpv-c789: <nil>
STEP: delete the pod
Jul 17 21:09:35.649: INFO: Waiting for pod exec-volume-test-preprovisionedpv-c789 to disappear
Jul 17 21:09:35.753: INFO: Pod exec-volume-test-preprovisionedpv-c789 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-c789
Jul 17 21:09:35.753: INFO: Deleting pod "exec-volume-test-preprovisionedpv-c789" in namespace "volume-6358"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":31,"skipped":238,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:39.425: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 125 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:179

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1566
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":24,"skipped":128,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:39.505: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 140 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:40.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-8879" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.","total":-1,"completed":32,"skipped":264,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:41.085: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:41.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-266" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":28,"skipped":171,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:41.380: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 77 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:41.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5449" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":132,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:09:34.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-a5c319b1-67a2-4623-adf3-1ee06caa9276
STEP: Creating a pod to test consume configMaps
Jul 17 21:09:34.885: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6" in namespace "configmap-2931" to be "Succeeded or Failed"
Jul 17 21:09:34.990: INFO: Pod "pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 104.889352ms
Jul 17 21:09:37.095: INFO: Pod "pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209747003s
Jul 17 21:09:39.199: INFO: Pod "pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314361478s
Jul 17 21:09:41.303: INFO: Pod "pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.418118563s
STEP: Saw pod success
Jul 17 21:09:41.303: INFO: Pod "pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6" satisfied condition "Succeeded or Failed"
Jul 17 21:09:41.407: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6 container agnhost-container: <nil>
STEP: delete the pod
Jul 17 21:09:41.625: INFO: Waiting for pod pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6 to disappear
Jul 17 21:09:41.729: INFO: Pod pod-configmaps-b3660138-a6ac-4a5c-96e4-b157ae9b1ce6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.815 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":189,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:41.963: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 52 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 17 21:09:42.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c5de64e-0d0e-4720-9d02-299ee6634972" in namespace "downward-api-6061" to be "Succeeded or Failed"
Jul 17 21:09:42.166: INFO: Pod "downwardapi-volume-4c5de64e-0d0e-4720-9d02-299ee6634972": Phase="Pending", Reason="", readiness=false. Elapsed: 103.782843ms
Jul 17 21:09:44.271: INFO: Pod "downwardapi-volume-4c5de64e-0d0e-4720-9d02-299ee6634972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208401994s
STEP: Saw pod success
Jul 17 21:09:44.271: INFO: Pod "downwardapi-volume-4c5de64e-0d0e-4720-9d02-299ee6634972" satisfied condition "Succeeded or Failed"
Jul 17 21:09:44.376: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod downwardapi-volume-4c5de64e-0d0e-4720-9d02-299ee6634972 container client-container: <nil>
STEP: delete the pod
Jul 17 21:09:44.601: INFO: Waiting for pod downwardapi-volume-4c5de64e-0d0e-4720-9d02-299ee6634972 to disappear
Jul 17 21:09:44.706: INFO: Pod downwardapi-volume-4c5de64e-0d0e-4720-9d02-299ee6634972 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:44.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6061" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":181,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:44.973: INFO: Only supported for providers [gce gke] (not aws)
... skipping 61 lines ...
Jul 17 21:08:08.658: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9398
Jul 17 21:08:08.760: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9398
Jul 17 21:08:08.864: INFO: creating *v1.StatefulSet: csi-mock-volumes-9398-8876/csi-mockplugin
Jul 17 21:08:08.968: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9398
Jul 17 21:08:09.072: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9398"
Jul 17 21:08:09.174: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9398 to register on node ip-172-20-55-234.eu-west-3.compute.internal
I0717 21:08:17.765292   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0717 21:08:17.872481   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9398","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0717 21:08:17.976296   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0717 21:08:18.079410   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0717 21:08:18.308286   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9398","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0717 21:08:18.849200   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9398"},"Error":"","FullError":null}
STEP: Creating pod
Jul 17 21:08:26.151: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0717 21:08:26.380650   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0717 21:08:26.486997   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c"}}},"Error":"","FullError":null}
I0717 21:08:29.436521   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul 17 21:08:29.542: INFO: >>> kubeConfig: /root/.kube/config
I0717 21:08:30.285514   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c","storage.kubernetes.io/csiProvisionerIdentity":"1626556098131-8081-csi-mock-csi-mock-volumes-9398"}},"Response":{},"Error":"","FullError":null}
I0717 21:08:30.866407   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul 17 21:08:30.978: INFO: >>> kubeConfig: /root/.kube/config
Jul 17 21:08:31.679: INFO: >>> kubeConfig: /root/.kube/config
Jul 17 21:08:32.399: INFO: >>> kubeConfig: /root/.kube/config
I0717 21:08:33.120351   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c/globalmount","target_path":"/var/lib/kubelet/pods/3669dc50-4a43-41fd-9461-edcdbcddd9d9/volumes/kubernetes.io~csi/pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c","storage.kubernetes.io/csiProvisionerIdentity":"1626556098131-8081-csi-mock-csi-mock-volumes-9398"}},"Response":{},"Error":"","FullError":null}
Jul 17 21:08:34.566: INFO: Deleting pod "pvc-volume-tester-xqdfj" in namespace "csi-mock-volumes-9398"
Jul 17 21:08:34.671: INFO: Wait up to 5m0s for pod "pvc-volume-tester-xqdfj" to be fully deleted
Jul 17 21:08:38.121: INFO: >>> kubeConfig: /root/.kube/config
I0717 21:08:38.809043   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/3669dc50-4a43-41fd-9461-edcdbcddd9d9/volumes/kubernetes.io~csi/pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c/mount"},"Response":{},"Error":"","FullError":null}
I0717 21:08:38.923270   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0717 21:08:39.027580   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c/globalmount"},"Response":{},"Error":"","FullError":null}
I0717 21:08:43.005869   12300 csi.go:392] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jul 17 21:08:43.983: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lwxtx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9398", SelfLink:"", UID:"d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c", ResourceVersion:"37169", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152906, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0055c1290), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0055c12a8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0029d2df0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0029d2e00), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 17 21:08:43.983: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lwxtx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9398", SelfLink:"", UID:"d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c", ResourceVersion:"37172", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152906, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-55-234.eu-west-3.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00211b170), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00211b188)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00211b1a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00211b1b8)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002fd3a50), VolumeMode:(*v1.PersistentVolumeMode)(0xc002fd3a70), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 17 21:08:43.983: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lwxtx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9398", SelfLink:"", UID:"d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c", ResourceVersion:"37173", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152906, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9398", "volume.kubernetes.io/selected-node":"ip-172-20-55-234.eu-west-3.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a408), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a420)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a438), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a450)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a468), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a480)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc003119890), VolumeMode:(*v1.PersistentVolumeMode)(0xc0031198a0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 17 21:08:43.983: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lwxtx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9398", SelfLink:"", UID:"d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c", ResourceVersion:"37196", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152906, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9398", "volume.kubernetes.io/selected-node":"ip-172-20-55-234.eu-west-3.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a4b0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a4c8)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a4e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a4f8)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a510), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a528)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c", StorageClassName:(*string)(0xc0031198d0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0031198e0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul 17 21:08:43.983: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-lwxtx", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9398", SelfLink:"", UID:"d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c", ResourceVersion:"37197", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63762152906, loc:(*time.Location)(0x9ddf5a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9398", "volume.kubernetes.io/selected-node":"ip-172-20-55-234.eu-west-3.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a558), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a570)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a5a0)}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00596a5b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00596a5d0)}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-d3c2f21a-c7f9-4e94-bc5e-546174d8ba6c", StorageClassName:(*string)(0xc003119910), VolumeMode:(*v1.PersistentVolumeMode)(0xc003119920), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 49 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:900
    exhausted, late binding, no topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:958
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology","total":-1,"completed":35,"skipped":193,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:45.892: INFO: Driver local doesn't support ext3 -- skipping
... skipping 163 lines ...
• [SLOW TEST:19.422 seconds]
[sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":153,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:46.629: INFO: Only supported for providers [gce gke] (not aws)
... skipping 43 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 17 21:09:36.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048" in namespace "downward-api-4014" to be "Succeeded or Failed"
Jul 17 21:09:36.636: INFO: Pod "downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048": Phase="Pending", Reason="", readiness=false. Elapsed: 104.26017ms
Jul 17 21:09:38.739: INFO: Pod "downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206871576s
Jul 17 21:09:40.845: INFO: Pod "downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313365688s
Jul 17 21:09:42.950: INFO: Pod "downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048": Phase="Running", Reason="", readiness=true. Elapsed: 6.417699919s
Jul 17 21:09:45.052: INFO: Pod "downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048": Phase="Running", Reason="", readiness=true. Elapsed: 8.520286613s
Jul 17 21:09:47.155: INFO: Pod "downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.622989516s
STEP: Saw pod success
Jul 17 21:09:47.155: INFO: Pod "downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048" satisfied condition "Succeeded or Failed"
Jul 17 21:09:47.257: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048 container client-container: <nil>
STEP: delete the pod
Jul 17 21:09:47.468: INFO: Waiting for pod downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048 to disappear
Jul 17 21:09:47.570: INFO: Pod downwardapi-volume-ccc5e049-94fb-4c26-9cc6-da67a8236048 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:11.874 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":146,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 74 lines ...
• [SLOW TEST:72.797 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":191,"failed":1,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 199 lines ...
• [SLOW TEST:9.797 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, absolute => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:267
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":26,"skipped":139,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:51.715: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:54.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1201" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":27,"skipped":140,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":103,"failed":0}
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:09:50.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
• [SLOW TEST:26.173 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":103,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:56.863: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Driver local doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":21,"skipped":128,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:56.873: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":28,"skipped":155,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:09:47.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Jul 17 21:09:48.250: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-589ad7e0-d13e-43e6-af59-2522883a71dc" in namespace "security-context-test-8313" to be "Succeeded or Failed"
Jul 17 21:09:48.362: INFO: Pod "alpine-nnp-true-589ad7e0-d13e-43e6-af59-2522883a71dc": Phase="Pending", Reason="", readiness=false. Elapsed: 111.886722ms
Jul 17 21:09:50.467: INFO: Pod "alpine-nnp-true-589ad7e0-d13e-43e6-af59-2522883a71dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217590052s
Jul 17 21:09:52.572: INFO: Pod "alpine-nnp-true-589ad7e0-d13e-43e6-af59-2522883a71dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322537272s
Jul 17 21:09:54.679: INFO: Pod "alpine-nnp-true-589ad7e0-d13e-43e6-af59-2522883a71dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429265932s
Jul 17 21:09:56.785: INFO: Pod "alpine-nnp-true-589ad7e0-d13e-43e6-af59-2522883a71dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.534889774s
Jul 17 21:09:56.785: INFO: Pod "alpine-nnp-true-589ad7e0-d13e-43e6-af59-2522883a71dc" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:09:56.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8313" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":29,"skipped":155,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:57.123: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 23 lines ...
Jul 17 21:09:45.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 17 21:09:46.603: INFO: Waiting up to 5m0s for pod "pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5" in namespace "emptydir-8240" to be "Succeeded or Failed"
Jul 17 21:09:46.707: INFO: Pod "pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 104.260128ms
Jul 17 21:09:48.811: INFO: Pod "pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208672066s
Jul 17 21:09:50.922: INFO: Pod "pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319534716s
Jul 17 21:09:53.031: INFO: Pod "pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428540805s
Jul 17 21:09:55.136: INFO: Pod "pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.533523826s
Jul 17 21:09:57.241: INFO: Pod "pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.638240105s
STEP: Saw pod success
Jul 17 21:09:57.241: INFO: Pod "pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5" satisfied condition "Succeeded or Failed"
Jul 17 21:09:57.347: INFO: Trying to get logs from node ip-172-20-36-75.eu-west-3.compute.internal pod pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5 container test-container: <nil>
STEP: delete the pod
Jul 17 21:09:57.562: INFO: Waiting for pod pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5 to disappear
Jul 17 21:09:57.665: INFO: Pod pod-746bb73d-9a2f-410a-a9da-4a4e2aabd3e5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 31 lines ...
  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ubernetes_lite_volumes.go:47
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":196,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:09:57.891: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 174 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:10:00.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1482" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":32,"skipped":281,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul 17 21:09:58.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0eae128e-fec5-4dbf-a17f-02a6c4a6e71b" in namespace "projected-8200" to be "Succeeded or Failed"
Jul 17 21:09:58.653: INFO: Pod "downwardapi-volume-0eae128e-fec5-4dbf-a17f-02a6c4a6e71b": Phase="Pending", Reason="", readiness=false. Elapsed: 103.735508ms
Jul 17 21:10:00.759: INFO: Pod "downwardapi-volume-0eae128e-fec5-4dbf-a17f-02a6c4a6e71b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209191185s
STEP: Saw pod success
Jul 17 21:10:00.759: INFO: Pod "downwardapi-volume-0eae128e-fec5-4dbf-a17f-02a6c4a6e71b" satisfied condition "Succeeded or Failed"
Jul 17 21:10:00.863: INFO: Trying to get logs from node ip-172-20-38-184.eu-west-3.compute.internal pod downwardapi-volume-0eae128e-fec5-4dbf-a17f-02a6c4a6e71b container client-container: <nil>
STEP: delete the pod
Jul 17 21:10:01.085: INFO: Waiting for pod downwardapi-volume-0eae128e-fec5-4dbf-a17f-02a6c4a6e71b to disappear
Jul 17 21:10:01.190: INFO: Pod downwardapi-volume-0eae128e-fec5-4dbf-a17f-02a6c4a6e71b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:10:01.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8200" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":203,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:10:01.410: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 51 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul 17 21:10:01.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-5508" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s","total":-1,"completed":33,"skipped":285,"failed":1,"failures":["[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]"]}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":26,"skipped":177,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul 17 21:09:42.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 68 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:376
    should support port-forward
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:619
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":27,"skipped":177,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul 17 21:10:04.233: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 1673 lines ...
Failure: error trying to reach service: dial tcp 172.20.36.55:80: i... (503; 30.20813782s)
Jul 17 21:09:57.450: INFO: (19) /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/: k8s v1 Status Failure: error trying to reach service: dial tcp 172.20.36.55:1080:... (503; 30.208100926s)
Jul 17 21:09:57.557: INFO: Pod proxy-service-l6cxg-28kgc has the following error logs: 
Jul 17 21:09:57.558: FAIL: 0 (503; 30.1075282s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.107878901s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.108089421s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.108414826s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.108171205s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.108545192s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.108210264s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.108348832s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.108423718s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.108597945s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.208652208s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.208745507s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.208725393s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.208690998s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.20874208s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
0 (503; 30.208638622s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.109311786s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.109663682s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.109799282s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.110079912s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.109882708s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.110241878s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.110023675s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.110194361s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.110163877s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.110298466s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.209065615s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.209179769s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.20918876s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.209253686s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.209109598s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
1 (503; 30.20941703s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.105635889s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.108527775s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.108500149s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.1086174s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.109012469s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.108968138s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.110188335s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.110453737s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.11061568s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.110594894s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.208889051s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.208983093s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.210208796s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.214757219s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.214708701s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
2 (503; 30.214780543s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.107311608s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.107721004s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.107393503s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.107607166s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.110376181s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.110788824s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.110659757s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.110600212s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.11047369s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.110556197s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.208155039s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.208481341s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.208555315s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.208243628s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.208532187s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
3 (503; 30.208857069s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.107206227s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.107421935s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.107245359s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.107284654s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.107294094s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.107579349s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.107643102s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.109188366s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.109260225s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.109295409s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.208408728s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.208353774s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.208466417s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.210413667s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.210532792s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
4 (503; 30.210555122s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.109834706s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.11296311s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.113265039s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.113162967s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.11312872s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.113330925s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.113462928s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.118493957s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.119920949s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.119698061s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.215928837s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.216110171s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.216323271s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.220949851s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.220784031s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
5 (503; 30.22079375s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.111614617s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.111726986s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.113533321s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.113846953s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.115646551s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.115843474s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.115784562s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.115957755s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.115871823s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.115933008s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.2072835s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.20769989s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.207807499s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.208902139s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.208912802s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
6 (503; 30.210178265s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.113865245s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.113684829s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.113845941s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.114124825s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.114370898s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.114530839s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.114350694s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.114624193s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.114906355s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.115017659s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.114820993s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.115210911s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.115171308s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.115117565s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.208273352s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
7 (503; 30.208284853s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.107646289s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.107800417s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.108004553s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.107924062s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.10807349s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.107915187s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.108083425s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.10809955s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.108538097s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.108435734s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.207703722s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.207830099s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.208540387s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.208600124s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.208808419s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
8 (503; 30.208800439s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.105474199s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:462/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.107424296s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.107391379s): path /api/v1/namespaces/proxy-5059/pods/https:proxy-service-l6cxg-28kgc:460/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.107595786s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.107542752s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.109206821s): path /api/v1/namespaces/proxy-5059/services/proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.109269168s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.109500288s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.109350539s): path /api/v1/namespaces/proxy-5059/services/http:proxy-service-l6cxg:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.109318542s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.2077199s): path /api/v1/namespaces/proxy-5059/pods/proxy-service-l6cxg-28kgc:1080/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.207819642s): path /api/v1/namespaces/proxy-5059/pods/http:proxy-service-l6cxg-28kgc:160/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}
9 (503; 30.207976696s): path /api/v1/namespaces/proxy-5059/services/https:proxy-service-l6cxg:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion: Continue: RemainingItemCount:<nil>} Status:Failure Message:the server is currently unable to handle the request Reason:ServiceUnavailable Details:&StatusDetails{Name:,Group:,Kind:,Causes:[]StatusCause{StatusCause{Type:UnexpectedServerResponse,Message:unknown,Field:,},},RetryAfterSeconds:0,UID:,} Code:503}