PR: olemarkus: Enable IRSA for CCM
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-07-06 06:00
Elapsed: 1h3m
Revision: 02b3f7ca64bbd829814cd9afd5c223b1d8005139
Refs: 11818

No Test Failures!


Error lines from build-log.txt

... skipping 488 lines ...
I0706 06:04:28.196414    4220 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0706 06:04:28.213965   11746 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0706 06:04:28.214194   11746 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0706 06:04:28.214231   11746 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
W0706 06:04:28.711784    4220 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0706 06:04:28.711856    4220 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --yes
I0706 06:04:28.730699   11757 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0706 06:04:28.730945   11757 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0706 06:04:28.730982   11757 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
I0706 06:04:29.236357    4220 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/06 06:04:29 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0706 06:04:29.244490    4220 http.go:37] curl https://ip.jsb.workers.dev
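For reference, the two http.go lines above are the runner probing for its own external IP: the GCE metadata path returns 404, so it falls back to a public endpoint. A minimal shell sketch of the same logic (an illustration, not the actual runner source; the curl flags are assumptions, though GCE's metadata service genuinely requires the Metadata-Flavor header):

  # Try the GCE metadata service; it returns 404 here (no external-ip
  # access-config on this instance), so fall back to a public
  # what's-my-ip endpoint.
  ip=$(curl -sf -H 'Metadata-Flavor: Google' \
    'http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip') \
    || ip=$(curl -sf 'https://ip.jsb.workers.dev')
  echo "external ip: ${ip}"

The address found this way is presumably what feeds the --admin-access CIDR on the create-cluster call below.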
I0706 06:04:29.328513    4220 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.0-beta.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210621 --channel=alpha --networking=kubenet --container-runtime=containerd --override=cluster.spec.cloudControllerManager.cloudProvider=aws --override=cluster.spec.serviceAccountIssuerDiscovery.discoveryStore=s3://k8s-kops-prow/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery --override=cluster.spec.serviceAccountIssuerDiscovery.enableAWSOIDCProvider=true --admin-access 34.70.228.176/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-west-2a --master-size c5.large
I0706 06:04:29.348864   11767 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0706 06:04:29.348948   11767 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0706 06:04:29.348953   11767 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
I0706 06:04:29.396228   11767 create_cluster.go:740] Using SSH public key: /etc/aws-ssh/aws-ssh-public
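For context, the --override flags on the kops create cluster invocation above correspond to fields of the kops Cluster spec. A minimal YAML sketch of the relevant fragment, with values copied from the flags shown (not a dump of the actual cluster object):

  spec:
    cloudControllerManager:
      cloudProvider: aws
    serviceAccountIssuerDiscovery:
      # Publish the service-account issuer's OIDC discovery documents
      # to S3 so AWS IAM can federate against them.
      discoveryStore: s3://k8s-kops-prow/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
      enableAWSOIDCProvider: true

Together these run the AWS cloud-controller-manager and expose an OIDC issuer for the cluster's service accounts, which is the IRSA wiring this PR exercises.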
... skipping 33 lines ...
I0706 06:04:54.322999    4220 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0706 06:04:54.339230   11789 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0706 06:04:54.339313   11789 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0706 06:04:54.339318   11789 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
Validating cluster e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io

W0706 06:04:55.558607   11789 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.
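The placeholder address makes this bootstrap state easy to probe from outside; a quick sketch (illustrative commands, not part of this job's tooling):

  # While bootstrap is in progress this returns the kops placeholder
  # 203.0.113.123, or nothing at all; once dns-controller has updated
  # the record it returns the master's real address.
  dig +short api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io

  # The diagnostic logs the message points to, once the API answers:
  kubectl -n kube-system logs deployment/dns-controller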

Validation Failed
... skipping 304 lines: the identical dns/apiserver "Validation Failed" block above repeated on every retry (roughly every 10 seconds) from 06:05:05 through 06:08:06 ...
W0706 06:08:16.373246   11789 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 9 lines ...
Machine	i-06e58d734a80336fc								machine "i-06e58d734a80336fc" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-nnwtb					system-cluster-critical pod "coredns-autoscaler-6f594f4c58-nnwtb" is pending
Pod	kube-system/coredns-f45c4bf76-nk5jq						system-cluster-critical pod "coredns-f45c4bf76-nk5jq" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-b7qhf					system-cluster-critical pod "ebs-csi-controller-566c97f85c-b7qhf" is pending
Pod	kube-system/etcd-manager-events-ip-172-20-63-116.eu-west-2.compute.internal	system-cluster-critical pod "etcd-manager-events-ip-172-20-63-116.eu-west-2.compute.internal" is pending

Validation Failed
W0706 06:08:29.056634   11789 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 10 lines ...
Node	ip-172-20-59-118.eu-west-2.compute.internal	node "ip-172-20-59-118.eu-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/coredns-f45c4bf76-fsb5r		system-cluster-critical pod "coredns-f45c4bf76-fsb5r" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-b7qhf	system-cluster-critical pod "ebs-csi-controller-566c97f85c-b7qhf" is pending
Pod	kube-system/ebs-csi-node-2x6sf			system-node-critical pod "ebs-csi-node-2x6sf" is pending
Pod	kube-system/ebs-csi-node-x49qv			system-node-critical pod "ebs-csi-node-x49qv" is pending

Validation Failed
W0706 06:08:40.810660   11789 validate_cluster.go:221] (will retry): cluster not yet healthy
W0706 06:08:50.868994   11789 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

NODE STATUS
... skipping 6 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Node	ip-172-20-32-57.eu-west-2.compute.internal	node "ip-172-20-32-57.eu-west-2.compute.internal" of role "node" is not ready
Pod	kube-system/ebs-csi-node-grgf4			system-node-critical pod "ebs-csi-node-grgf4" is pending

Validation Failed
W0706 06:09:02.759238   11789 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 6 lines ...
ip-172-20-63-116.eu-west-2.compute.internal	master	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-56-54.eu-west-2.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-56-54.eu-west-2.compute.internal" is pending

Validation Failed
W0706 06:09:14.578563   11789 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-west-2a	Master	c5.large	1	1	eu-west-2a
nodes-eu-west-2a	Node	t3.medium	4	4	eu-west-2a

... skipping 200 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 308 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 264 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 287 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:11:51.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-7291" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:11:51.467: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:11:51.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1579" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:11:53.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4096" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:11:53.393: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 66 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:803
    apply set/view last-applied
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:838
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:11:55.546: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 62 lines ...
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:11:55.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail when exceeds active deadline
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:255
STEP: Creating a job
STEP: Ensuring job past active deadline
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:11:58.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3020" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Job should fail when exceeds active deadline","total":-1,"completed":2,"skipped":21,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:11:58.512: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:11:59.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-4820" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":3,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:11:59.488: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 85 lines ...
• [SLOW TEST:10.127 seconds]
[sig-scheduling] LimitRange
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:00.663: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 67 lines ...
• [SLOW TEST:9.507 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:01.188: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 8 lines ...
Jul  6 06:11:51.635: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-4857d240-0daa-4b77-a62a-364302d7cc2c
STEP: Creating a pod to test consume configMaps
Jul  6 06:11:52.028: INFO: Waiting up to 5m0s for pod "pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b" in namespace "configmap-4927" to be "Succeeded or Failed"
Jul  6 06:11:52.124: INFO: Pod "pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b": Phase="Pending", Reason="", readiness=false. Elapsed: 96.22927ms
Jul  6 06:11:54.222: INFO: Pod "pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194072767s
Jul  6 06:11:56.319: INFO: Pod "pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290768646s
Jul  6 06:11:58.417: INFO: Pod "pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.38907506s
Jul  6 06:12:00.515: INFO: Pod "pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.487149984s
STEP: Saw pod success
Jul  6 06:12:00.515: INFO: Pod "pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b" satisfied condition "Succeeded or Failed"
Jul  6 06:12:00.612: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b container agnhost-container: <nil>
STEP: delete the pod
Jul  6 06:12:00.819: INFO: Waiting for pod pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b to disappear
Jul  6 06:12:00.915: INFO: Pod pod-configmaps-a40b620f-29de-45c4-b159-7c0a3a79368b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":14,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:01.252: INFO: Only supported for providers [openstack] (not aws)
... skipping 47 lines ...
• [SLOW TEST:13.428 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:03.948: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
W0706 06:11:50.820579   12520 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  6 06:11:50.820: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  6 06:11:51.112: INFO: Waiting up to 5m0s for pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934" in namespace "emptydir-3161" to be "Succeeded or Failed"
Jul  6 06:11:51.211: INFO: Pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934": Phase="Pending", Reason="", readiness=false. Elapsed: 98.968881ms
Jul  6 06:11:53.307: INFO: Pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195644668s
Jul  6 06:11:55.404: INFO: Pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292124708s
Jul  6 06:11:57.501: INFO: Pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389268037s
Jul  6 06:11:59.598: INFO: Pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934": Phase="Pending", Reason="", readiness=false. Elapsed: 8.486557931s
Jul  6 06:12:01.695: INFO: Pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934": Phase="Pending", Reason="", readiness=false. Elapsed: 10.583397768s
Jul  6 06:12:03.793: INFO: Pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.681169267s
STEP: Saw pod success
Jul  6 06:12:03.793: INFO: Pod "pod-7daa2778-a104-4a28-aff8-05c8e8310934" satisfied condition "Succeeded or Failed"
Jul  6 06:12:03.890: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-7daa2778-a104-4a28-aff8-05c8e8310934 container test-container: <nil>
STEP: delete the pod
Jul  6 06:12:04.416: INFO: Waiting for pod pod-7daa2778-a104-4a28-aff8-05c8e8310934 to disappear
Jul  6 06:12:04.512: INFO: Pod pod-7daa2778-a104-4a28-aff8-05c8e8310934 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.381 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox Pod with hostAliases
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:05.997: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 66 lines ...
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:12:06.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jul  6 06:12:06.620: INFO: found topology map[topology.kubernetes.io/zone:eu-west-2a]
Jul  6 06:12:06.620: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul  6 06:12:06.620: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 58 lines ...
Jul  6 06:11:51.028: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-969609a4-398f-4dea-9393-a39f6159b3ce
STEP: Creating a pod to test consume secrets
Jul  6 06:11:51.470: INFO: Waiting up to 5m0s for pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922" in namespace "secrets-9456" to be "Succeeded or Failed"
Jul  6 06:11:51.568: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922": Phase="Pending", Reason="", readiness=false. Elapsed: 97.846313ms
Jul  6 06:11:53.666: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195796558s
Jul  6 06:11:55.765: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294020085s
Jul  6 06:11:57.863: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922": Phase="Pending", Reason="", readiness=false. Elapsed: 6.392500647s
Jul  6 06:11:59.962: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922": Phase="Pending", Reason="", readiness=false. Elapsed: 8.491644314s
Jul  6 06:12:02.061: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922": Phase="Pending", Reason="", readiness=false. Elapsed: 10.589991146s
Jul  6 06:12:04.159: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922": Phase="Pending", Reason="", readiness=false. Elapsed: 12.688351658s
Jul  6 06:12:06.258: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.787588601s
STEP: Saw pod success
Jul  6 06:12:06.258: INFO: Pod "pod-secrets-691a000c-35a1-4754-bcec-8249168fc922" satisfied condition "Succeeded or Failed"
Jul  6 06:12:06.356: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-secrets-691a000c-35a1-4754-bcec-8249168fc922 container secret-volume-test: <nil>
STEP: delete the pod
Jul  6 06:12:06.560: INFO: Waiting for pod pod-secrets-691a000c-35a1-4754-bcec-8249168fc922 to disappear
Jul  6 06:12:06.657: INFO: Pod pod-secrets-691a000c-35a1-4754-bcec-8249168fc922 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.445 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:06.957: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 123 lines ...
• [SLOW TEST:17.250 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:07.720: INFO: Only supported for providers [vsphere] (not aws)
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":2,"skipped":3,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 23 lines ...
Jul  6 06:12:17.390: INFO: PersistentVolumeClaim pvc-sfmxd found but phase is Pending instead of Bound.
Jul  6 06:12:19.487: INFO: PersistentVolumeClaim pvc-sfmxd found and phase=Bound (14.777128999s)
Jul  6 06:12:19.487: INFO: Waiting up to 3m0s for PersistentVolume local-fc8zl to have phase Bound
Jul  6 06:12:19.583: INFO: PersistentVolume local-fc8zl found and phase=Bound (96.142372ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6sxx
STEP: Creating a pod to test exec-volume-test
Jul  6 06:12:19.874: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6sxx" in namespace "volume-1456" to be "Succeeded or Failed"
Jul  6 06:12:19.971: INFO: Pod "exec-volume-test-preprovisionedpv-6sxx": Phase="Pending", Reason="", readiness=false. Elapsed: 96.108709ms
Jul  6 06:12:22.067: INFO: Pod "exec-volume-test-preprovisionedpv-6sxx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.192631276s
STEP: Saw pod success
Jul  6 06:12:22.067: INFO: Pod "exec-volume-test-preprovisionedpv-6sxx" satisfied condition "Succeeded or Failed"
Jul  6 06:12:22.163: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-6sxx container exec-container-preprovisionedpv-6sxx: <nil>
STEP: delete the pod
Jul  6 06:12:22.367: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6sxx to disappear
Jul  6 06:12:22.464: INFO: Pod exec-volume-test-preprovisionedpv-6sxx no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6sxx
Jul  6 06:12:22.464: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6sxx" in namespace "volume-1456"
... skipping 85 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:23.794: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:24.348: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 225 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] files with FSGroup ownership should support (root,0644,tmpfs)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:67
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  6 06:12:24.951: INFO: Waiting up to 5m0s for pod "pod-71ea7b12-efed-40ad-ac05-022a8791bff2" in namespace "emptydir-5154" to be "Succeeded or Failed"
Jul  6 06:12:25.055: INFO: Pod "pod-71ea7b12-efed-40ad-ac05-022a8791bff2": Phase="Pending", Reason="", readiness=false. Elapsed: 103.692534ms
Jul  6 06:12:27.154: INFO: Pod "pod-71ea7b12-efed-40ad-ac05-022a8791bff2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.202022504s
STEP: Saw pod success
Jul  6 06:12:27.154: INFO: Pod "pod-71ea7b12-efed-40ad-ac05-022a8791bff2" satisfied condition "Succeeded or Failed"
Jul  6 06:12:27.251: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-71ea7b12-efed-40ad-ac05-022a8791bff2 container test-container: <nil>
STEP: delete the pod
Jul  6 06:12:27.463: INFO: Waiting for pod pod-71ea7b12-efed-40ad-ac05-022a8791bff2 to disappear
Jul  6 06:12:27.560: INFO: Pod pod-71ea7b12-efed-40ad-ac05-022a8791bff2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:12:27.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-766" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":15,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 156 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":1,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:29.503: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
• [SLOW TEST:6.058 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:29.910: INFO: Only supported for providers [gce gke] (not aws)
... skipping 14 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":4,"skipped":30,"failed":0}
[BeforeEach] [sig-network] NetworkPolicy API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:12:27.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename networkpolicies
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:12:30.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-1178" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":5,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:30.341: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:12:32.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8390" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Jul  6 06:12:16.145: INFO: PersistentVolumeClaim pvc-kdszh found but phase is Pending instead of Bound.
Jul  6 06:12:18.243: INFO: PersistentVolumeClaim pvc-kdszh found and phase=Bound (8.48826723s)
Jul  6 06:12:18.243: INFO: Waiting up to 3m0s for PersistentVolume local-t8k29 to have phase Bound
Jul  6 06:12:18.340: INFO: PersistentVolume local-t8k29 found and phase=Bound (96.383502ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-lx2d
STEP: Creating a pod to test subpath
Jul  6 06:12:18.630: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-lx2d" in namespace "provisioning-72" to be "Succeeded or Failed"
Jul  6 06:12:18.744: INFO: Pod "pod-subpath-test-preprovisionedpv-lx2d": Phase="Pending", Reason="", readiness=false. Elapsed: 113.796812ms
Jul  6 06:12:20.842: INFO: Pod "pod-subpath-test-preprovisionedpv-lx2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211465823s
Jul  6 06:12:22.940: INFO: Pod "pod-subpath-test-preprovisionedpv-lx2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.309172914s
Jul  6 06:12:25.039: INFO: Pod "pod-subpath-test-preprovisionedpv-lx2d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.408225971s
Jul  6 06:12:27.135: INFO: Pod "pod-subpath-test-preprovisionedpv-lx2d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.504946646s
Jul  6 06:12:29.233: INFO: Pod "pod-subpath-test-preprovisionedpv-lx2d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.602433534s
Jul  6 06:12:31.330: INFO: Pod "pod-subpath-test-preprovisionedpv-lx2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.699520337s
STEP: Saw pod success
Jul  6 06:12:31.330: INFO: Pod "pod-subpath-test-preprovisionedpv-lx2d" satisfied condition "Succeeded or Failed"
Jul  6 06:12:31.427: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-lx2d container test-container-subpath-preprovisionedpv-lx2d: <nil>
STEP: delete the pod
Jul  6 06:12:31.633: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-lx2d to disappear
Jul  6 06:12:31.730: INFO: Pod pod-subpath-test-preprovisionedpv-lx2d no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-lx2d
Jul  6 06:12:31.730: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-lx2d" in namespace "provisioning-72"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":1,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:33.247: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 25 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:12:35.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5889" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":37,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:36.181: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 142 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":11,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:47.267: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 127 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume without restarting pod if nodeExpansion=off
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:49.178: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 70 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      running a successful command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:512
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command","total":-1,"completed":2,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:49.664: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 40 lines ...
Jul  6 06:12:45.412: INFO: PersistentVolumeClaim pvc-92klg found but phase is Pending instead of Bound.
Jul  6 06:12:47.508: INFO: PersistentVolumeClaim pvc-92klg found and phase=Bound (4.287681327s)
Jul  6 06:12:47.508: INFO: Waiting up to 3m0s for PersistentVolume local-f56tn to have phase Bound
Jul  6 06:12:47.604: INFO: PersistentVolume local-f56tn found and phase=Bound (95.660102ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-q7ws
STEP: Creating a pod to test subpath
Jul  6 06:12:47.891: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-q7ws" in namespace "provisioning-8898" to be "Succeeded or Failed"
Jul  6 06:12:47.987: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ws": Phase="Pending", Reason="", readiness=false. Elapsed: 95.484948ms
Jul  6 06:12:50.084: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ws": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192421724s
Jul  6 06:12:52.179: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ws": Phase="Pending", Reason="", readiness=false. Elapsed: 4.288282642s
Jul  6 06:12:54.277: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ws": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.386229697s
STEP: Saw pod success
Jul  6 06:12:54.277: INFO: Pod "pod-subpath-test-preprovisionedpv-q7ws" satisfied condition "Succeeded or Failed"
Jul  6 06:12:54.373: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-q7ws container test-container-subpath-preprovisionedpv-q7ws: <nil>
STEP: delete the pod
Jul  6 06:12:54.572: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-q7ws to disappear
Jul  6 06:12:54.668: INFO: Pod pod-subpath-test-preprovisionedpv-q7ws no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-q7ws
Jul  6 06:12:54.668: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-q7ws" in namespace "provisioning-8898"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 21 lines ...
Jul  6 06:12:46.363: INFO: PersistentVolumeClaim pvc-jxmlk found but phase is Pending instead of Bound.
Jul  6 06:12:48.460: INFO: PersistentVolumeClaim pvc-jxmlk found and phase=Bound (14.782449802s)
Jul  6 06:12:48.460: INFO: Waiting up to 3m0s for PersistentVolume local-ds75d to have phase Bound
Jul  6 06:12:48.556: INFO: PersistentVolume local-ds75d found and phase=Bound (96.051318ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pnfs
STEP: Creating a pod to test subpath
Jul  6 06:12:48.848: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pnfs" in namespace "provisioning-8446" to be "Succeeded or Failed"
Jul  6 06:12:48.945: INFO: Pod "pod-subpath-test-preprovisionedpv-pnfs": Phase="Pending", Reason="", readiness=false. Elapsed: 96.265976ms
Jul  6 06:12:51.042: INFO: Pod "pod-subpath-test-preprovisionedpv-pnfs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193875798s
Jul  6 06:12:53.140: INFO: Pod "pod-subpath-test-preprovisionedpv-pnfs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.291305182s
STEP: Saw pod success
Jul  6 06:12:53.140: INFO: Pod "pod-subpath-test-preprovisionedpv-pnfs" satisfied condition "Succeeded or Failed"
Jul  6 06:12:53.238: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-pnfs container test-container-subpath-preprovisionedpv-pnfs: <nil>
STEP: delete the pod
Jul  6 06:12:53.443: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pnfs to disappear
Jul  6 06:12:53.540: INFO: Pod pod-subpath-test-preprovisionedpv-pnfs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pnfs
Jul  6 06:12:53.540: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pnfs" in namespace "provisioning-8446"
STEP: Creating pod pod-subpath-test-preprovisionedpv-pnfs
STEP: Creating a pod to test subpath
Jul  6 06:12:53.734: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pnfs" in namespace "provisioning-8446" to be "Succeeded or Failed"
Jul  6 06:12:53.830: INFO: Pod "pod-subpath-test-preprovisionedpv-pnfs": Phase="Pending", Reason="", readiness=false. Elapsed: 95.954951ms
Jul  6 06:12:55.927: INFO: Pod "pod-subpath-test-preprovisionedpv-pnfs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.193020069s
STEP: Saw pod success
Jul  6 06:12:55.927: INFO: Pod "pod-subpath-test-preprovisionedpv-pnfs" satisfied condition "Succeeded or Failed"
Jul  6 06:12:56.023: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-pnfs container test-container-subpath-preprovisionedpv-pnfs: <nil>
STEP: delete the pod
Jul  6 06:12:56.225: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pnfs to disappear
Jul  6 06:12:56.321: INFO: Pod pod-subpath-test-preprovisionedpv-pnfs no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pnfs
Jul  6 06:12:56.321: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pnfs" in namespace "provisioning-8446"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":26,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:51.534 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:58.604: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1233
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:59.110: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":2,"skipped":30,"failed":0}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:12:27.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:31.285 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:406
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":3,"skipped":30,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:12:59.205: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 23 lines ...
Jul  6 06:12:29.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
Jul  6 06:12:30.055: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 06:12:30.253: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4888" in namespace "provisioning-4888" to be "Succeeded or Failed"
Jul  6 06:12:30.350: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 97.034939ms
Jul  6 06:12:32.449: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196310877s
Jul  6 06:12:34.547: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294605495s
Jul  6 06:12:36.646: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393689487s
Jul  6 06:12:38.745: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 8.492193638s
Jul  6 06:12:40.843: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 10.59073257s
Jul  6 06:12:42.941: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 12.68878706s
Jul  6 06:12:45.040: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.787055469s
STEP: Saw pod success
Jul  6 06:12:45.040: INFO: Pod "hostpath-symlink-prep-provisioning-4888" satisfied condition "Succeeded or Failed"
Jul  6 06:12:45.040: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4888" in namespace "provisioning-4888"
Jul  6 06:12:45.144: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4888" to be fully deleted
Jul  6 06:12:45.241: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-gw66
STEP: Creating a pod to test subpath
Jul  6 06:12:45.342: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-gw66" in namespace "provisioning-4888" to be "Succeeded or Failed"
Jul  6 06:12:45.439: INFO: Pod "pod-subpath-test-inlinevolume-gw66": Phase="Pending", Reason="", readiness=false. Elapsed: 97.32543ms
Jul  6 06:12:47.537: INFO: Pod "pod-subpath-test-inlinevolume-gw66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194836086s
Jul  6 06:12:49.634: INFO: Pod "pod-subpath-test-inlinevolume-gw66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292378515s
Jul  6 06:12:51.732: INFO: Pod "pod-subpath-test-inlinevolume-gw66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389938815s
Jul  6 06:12:53.829: INFO: Pod "pod-subpath-test-inlinevolume-gw66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.487556831s
STEP: Saw pod success
Jul  6 06:12:53.829: INFO: Pod "pod-subpath-test-inlinevolume-gw66" satisfied condition "Succeeded or Failed"
Jul  6 06:12:53.927: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-gw66 container test-container-subpath-inlinevolume-gw66: <nil>
STEP: delete the pod
Jul  6 06:12:54.130: INFO: Waiting for pod pod-subpath-test-inlinevolume-gw66 to disappear
Jul  6 06:12:54.227: INFO: Pod pod-subpath-test-inlinevolume-gw66 no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-gw66
Jul  6 06:12:54.227: INFO: Deleting pod "pod-subpath-test-inlinevolume-gw66" in namespace "provisioning-4888"
STEP: Deleting pod
Jul  6 06:12:54.324: INFO: Deleting pod "pod-subpath-test-inlinevolume-gw66" in namespace "provisioning-4888"
Jul  6 06:12:54.519: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-4888" in namespace "provisioning-4888" to be "Succeeded or Failed"
Jul  6 06:12:54.616: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 97.257421ms
Jul  6 06:12:56.716: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196970112s
Jul  6 06:12:58.814: INFO: Pod "hostpath-symlink-prep-provisioning-4888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294884449s
STEP: Saw pod success
Jul  6 06:12:58.814: INFO: Pod "hostpath-symlink-prep-provisioning-4888" satisfied condition "Succeeded or Failed"
Jul  6 06:12:58.814: INFO: Deleting pod "hostpath-symlink-prep-provisioning-4888" in namespace "provisioning-4888"
Jul  6 06:12:58.920: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-4888" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:12:59.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-4888" for this suite.
... skipping 17 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Jul  6 06:12:58.004: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1385" to be "Succeeded or Failed"
Jul  6 06:12:58.099: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 95.413998ms
Jul  6 06:13:00.195: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.191450541s
STEP: Saw pod success
Jul  6 06:13:00.195: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  6 06:13:00.291: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jul  6 06:13:00.490: INFO: Waiting for pod pod-host-path-test to disappear
Jul  6 06:13:00.585: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:00.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1385" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":3,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:00.792: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 46 lines ...
Jul  6 06:12:58.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  6 06:12:59.195: INFO: Waiting up to 5m0s for pod "pod-df48f14b-7c71-4497-b55d-ad0021a4f7e0" in namespace "emptydir-3157" to be "Succeeded or Failed"
Jul  6 06:12:59.291: INFO: Pod "pod-df48f14b-7c71-4497-b55d-ad0021a4f7e0": Phase="Pending", Reason="", readiness=false. Elapsed: 96.617799ms
Jul  6 06:13:01.389: INFO: Pod "pod-df48f14b-7c71-4497-b55d-ad0021a4f7e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194289676s
STEP: Saw pod success
Jul  6 06:13:01.389: INFO: Pod "pod-df48f14b-7c71-4497-b55d-ad0021a4f7e0" satisfied condition "Succeeded or Failed"
Jul  6 06:13:01.487: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod pod-df48f14b-7c71-4497-b55d-ad0021a4f7e0 container test-container: <nil>
STEP: delete the pod
Jul  6 06:13:01.689: INFO: Waiting for pod pod-df48f14b-7c71-4497-b55d-ad0021a4f7e0 to disappear
Jul  6 06:13:01.786: INFO: Pod pod-df48f14b-7c71-4497-b55d-ad0021a4f7e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:01.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3157" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:01.992: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 94 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 158 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","total":-1,"completed":4,"skipped":57,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:5.137 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":5,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:02.858: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 22 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-ca3cb63a-7158-4dfc-8550-e86d6cbdeda6
STEP: Creating a pod to test consume configMaps
Jul  6 06:13:01.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b6208c8-974e-4df9-b737-11e4c881a16f" in namespace "configmap-8449" to be "Succeeded or Failed"
Jul  6 06:13:01.606: INFO: Pod "pod-configmaps-8b6208c8-974e-4df9-b737-11e4c881a16f": Phase="Pending", Reason="", readiness=false. Elapsed: 98.393815ms
Jul  6 06:13:03.703: INFO: Pod "pod-configmaps-8b6208c8-974e-4df9-b737-11e4c881a16f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195411679s
STEP: Saw pod success
Jul  6 06:13:03.703: INFO: Pod "pod-configmaps-8b6208c8-974e-4df9-b737-11e4c881a16f" satisfied condition "Succeeded or Failed"
Jul  6 06:13:03.798: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod pod-configmaps-8b6208c8-974e-4df9-b737-11e4c881a16f container agnhost-container: <nil>
STEP: delete the pod
Jul  6 06:13:03.998: INFO: Waiting for pod pod-configmaps-8b6208c8-974e-4df9-b737-11e4c881a16f to disappear
Jul  6 06:13:04.094: INFO: Pod pod-configmaps-8b6208c8-974e-4df9-b737-11e4c881a16f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:04.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8449" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":4,"skipped":39,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:04.314: INFO: Only supported for providers [azure] (not aws)
... skipping 40 lines ...
• [SLOW TEST:5.720 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":40,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:04.952: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 199 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:05.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2250" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource ","total":-1,"completed":6,"skipped":32,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 06:13:04.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-faf97740-9aa2-4ad4-9ec0-d8e64f6a9cc1" in namespace "downward-api-6278" to be "Succeeded or Failed"
Jul  6 06:13:05.002: INFO: Pod "downwardapi-volume-faf97740-9aa2-4ad4-9ec0-d8e64f6a9cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 95.470094ms
Jul  6 06:13:07.098: INFO: Pod "downwardapi-volume-faf97740-9aa2-4ad4-9ec0-d8e64f6a9cc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.192084329s
STEP: Saw pod success
Jul  6 06:13:07.098: INFO: Pod "downwardapi-volume-faf97740-9aa2-4ad4-9ec0-d8e64f6a9cc1" satisfied condition "Succeeded or Failed"
Jul  6 06:13:07.194: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod downwardapi-volume-faf97740-9aa2-4ad4-9ec0-d8e64f6a9cc1 container client-container: <nil>
STEP: delete the pod
Jul  6 06:13:07.391: INFO: Waiting for pod downwardapi-volume-faf97740-9aa2-4ad4-9ec0-d8e64f6a9cc1 to disappear
Jul  6 06:13:07.489: INFO: Pod downwardapi-volume-faf97740-9aa2-4ad4-9ec0-d8e64f6a9cc1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:07.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6278" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:07.761: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 117 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:08.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":6,"skipped":61,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:08.176: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 68 lines ...
Jul  6 06:13:02.242: INFO: PersistentVolumeClaim pvc-dc42b found but phase is Pending instead of Bound.
Jul  6 06:13:04.341: INFO: PersistentVolumeClaim pvc-dc42b found and phase=Bound (10.58967886s)
Jul  6 06:13:04.341: INFO: Waiting up to 3m0s for PersistentVolume local-tlfdb to have phase Bound
Jul  6 06:13:04.440: INFO: PersistentVolume local-tlfdb found and phase=Bound (98.675713ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pqsr
STEP: Creating a pod to test exec-volume-test
Jul  6 06:13:04.734: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pqsr" in namespace "volume-2384" to be "Succeeded or Failed"
Jul  6 06:13:04.831: INFO: Pod "exec-volume-test-preprovisionedpv-pqsr": Phase="Pending", Reason="", readiness=false. Elapsed: 97.108427ms
Jul  6 06:13:06.929: INFO: Pod "exec-volume-test-preprovisionedpv-pqsr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194976546s
STEP: Saw pod success
Jul  6 06:13:06.929: INFO: Pod "exec-volume-test-preprovisionedpv-pqsr" satisfied condition "Succeeded or Failed"
Jul  6 06:13:07.034: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod exec-volume-test-preprovisionedpv-pqsr container exec-container-preprovisionedpv-pqsr: <nil>
STEP: delete the pod
Jul  6 06:13:07.236: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pqsr to disappear
Jul  6 06:13:07.333: INFO: Pod exec-volume-test-preprovisionedpv-pqsr no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pqsr
Jul  6 06:13:07.333: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pqsr" in namespace "volume-2384"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:14.567 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:13:10.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Jul  6 06:13:10.832: INFO: Waiting up to 5m0s for pod "test-pod-b91c8dde-8631-41ec-8611-ced2ec940931" in namespace "svcaccounts-3114" to be "Succeeded or Failed"
Jul  6 06:13:10.933: INFO: Pod "test-pod-b91c8dde-8631-41ec-8611-ced2ec940931": Phase="Pending", Reason="", readiness=false. Elapsed: 100.151881ms
Jul  6 06:13:13.031: INFO: Pod "test-pod-b91c8dde-8631-41ec-8611-ced2ec940931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198150801s
Jul  6 06:13:15.130: INFO: Pod "test-pod-b91c8dde-8631-41ec-8611-ced2ec940931": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297200366s
Jul  6 06:13:17.228: INFO: Pod "test-pod-b91c8dde-8631-41ec-8611-ced2ec940931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.395126724s
STEP: Saw pod success
Jul  6 06:13:17.228: INFO: Pod "test-pod-b91c8dde-8631-41ec-8611-ced2ec940931" satisfied condition "Succeeded or Failed"
Jul  6 06:13:17.325: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod test-pod-b91c8dde-8631-41ec-8611-ced2ec940931 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 06:13:17.539: INFO: Waiting for pod test-pod-b91c8dde-8631-41ec-8611-ced2ec940931 to disappear
Jul  6 06:13:17.636: INFO: Pod test-pod-b91c8dde-8631-41ec-8611-ced2ec940931 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.597 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":3,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 45 lines ...
Jul  6 06:12:43.092: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-cmjcs] to have phase Bound
Jul  6 06:12:43.188: INFO: PersistentVolumeClaim pvc-cmjcs found and phase=Bound (96.429947ms)
STEP: Deleting the previously created pod
Jul  6 06:12:51.675: INFO: Deleting pod "pvc-volume-tester-4lzv4" in namespace "csi-mock-volumes-9732"
Jul  6 06:12:51.773: INFO: Wait up to 5m0s for pod "pvc-volume-tester-4lzv4" to be fully deleted
STEP: Checking CSI driver logs
Jul  6 06:12:56.066: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/1c1f341e-d04d-4788-8e0c-37ba65512d83/volumes/kubernetes.io~csi/pvc-5577c5c2-518a-4046-9cea-bb8f2211ab71/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-4lzv4
Jul  6 06:12:56.067: INFO: Deleting pod "pvc-volume-tester-4lzv4" in namespace "csi-mock-volumes-9732"
STEP: Deleting claim pvc-cmjcs
Jul  6 06:12:56.357: INFO: Waiting up to 2m0s for PersistentVolume pvc-5577c5c2-518a-4046-9cea-bb8f2211ab71 to get deleted
Jul  6 06:12:56.453: INFO: PersistentVolume pvc-5577c5c2-518a-4046-9cea-bb8f2211ab71 was removed
STEP: Deleting storageclass csi-mock-volumes-9732-scl2nmk
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":2,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:18.133: INFO: Only supported for providers [gce gke] (not aws)
... skipping 68 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4331328f-5ec2-477d-a933-d6c815c8bd39
STEP: Creating a pod to test consume secrets
Jul  6 06:13:18.540: INFO: Waiting up to 5m0s for pod "pod-secrets-246b6bbb-5c38-4bab-8642-4af9894505e4" in namespace "secrets-1635" to be "Succeeded or Failed"
Jul  6 06:13:18.638: INFO: Pod "pod-secrets-246b6bbb-5c38-4bab-8642-4af9894505e4": Phase="Pending", Reason="", readiness=false. Elapsed: 97.89604ms
Jul  6 06:13:20.736: INFO: Pod "pod-secrets-246b6bbb-5c38-4bab-8642-4af9894505e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195981225s
Jul  6 06:13:22.837: INFO: Pod "pod-secrets-246b6bbb-5c38-4bab-8642-4af9894505e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.296596264s
STEP: Saw pod success
Jul  6 06:13:22.837: INFO: Pod "pod-secrets-246b6bbb-5c38-4bab-8642-4af9894505e4" satisfied condition "Succeeded or Failed"
Jul  6 06:13:22.934: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-secrets-246b6bbb-5c38-4bab-8642-4af9894505e4 container secret-volume-test: <nil>
STEP: delete the pod
Jul  6 06:13:23.135: INFO: Waiting for pod pod-secrets-246b6bbb-5c38-4bab-8642-4af9894505e4 to disappear
Jul  6 06:13:23.232: INFO: Pod pod-secrets-246b6bbb-5c38-4bab-8642-4af9894505e4 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.577 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Jul  6 06:13:15.693: INFO: PersistentVolumeClaim pvc-5hnbm found but phase is Pending instead of Bound.
Jul  6 06:13:17.790: INFO: PersistentVolumeClaim pvc-5hnbm found and phase=Bound (10.585125866s)
Jul  6 06:13:17.790: INFO: Waiting up to 3m0s for PersistentVolume local-b4z7z to have phase Bound
Jul  6 06:13:17.887: INFO: PersistentVolume local-b4z7z found and phase=Bound (96.771716ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mbkn
STEP: Creating a pod to test subpath
Jul  6 06:13:18.177: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mbkn" in namespace "provisioning-1008" to be "Succeeded or Failed"
Jul  6 06:13:18.275: INFO: Pod "pod-subpath-test-preprovisionedpv-mbkn": Phase="Pending", Reason="", readiness=false. Elapsed: 97.112313ms
Jul  6 06:13:20.373: INFO: Pod "pod-subpath-test-preprovisionedpv-mbkn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195303101s
Jul  6 06:13:22.470: INFO: Pod "pod-subpath-test-preprovisionedpv-mbkn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.292964991s
STEP: Saw pod success
Jul  6 06:13:22.471: INFO: Pod "pod-subpath-test-preprovisionedpv-mbkn" satisfied condition "Succeeded or Failed"
Jul  6 06:13:22.567: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-mbkn container test-container-volume-preprovisionedpv-mbkn: <nil>
STEP: delete the pod
Jul  6 06:13:22.774: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mbkn to disappear
Jul  6 06:13:22.872: INFO: Pod pod-subpath-test-preprovisionedpv-mbkn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mbkn
Jul  6 06:13:22.872: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mbkn" in namespace "provisioning-1008"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":5,"skipped":59,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:25.608: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:12:23.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
Jul  6 06:12:33.667: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-6dhbj] to have phase Bound
Jul  6 06:12:33.763: INFO: PersistentVolumeClaim pvc-6dhbj found and phase=Bound (96.39179ms)
STEP: Deleting the previously created pod
Jul  6 06:12:54.245: INFO: Deleting pod "pvc-volume-tester-nmsbp" in namespace "csi-mock-volumes-184"
Jul  6 06:12:54.343: INFO: Wait up to 5m0s for pod "pvc-volume-tester-nmsbp" to be fully deleted
STEP: Checking CSI driver logs
Jul  6 06:13:04.635: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7d84611b-51f6-44e1-9a6f-d6fea6d77e36/volumes/kubernetes.io~csi/pvc-c5622d8b-334a-44de-9fc8-ebfba6a5a725/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-nmsbp
Jul  6 06:13:04.635: INFO: Deleting pod "pvc-volume-tester-nmsbp" in namespace "csi-mock-volumes-184"
STEP: Deleting claim pvc-6dhbj
Jul  6 06:13:04.927: INFO: Waiting up to 2m0s for PersistentVolume pvc-c5622d8b-334a-44de-9fc8-ebfba6a5a725 to get deleted
Jul  6 06:13:05.024: INFO: PersistentVolume pvc-c5622d8b-334a-44de-9fc8-ebfba6a5a725 was removed
STEP: Deleting storageclass csi-mock-volumes-184-scpkv5p
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when podInfoOnMount=nil
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil","total":-1,"completed":2,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:26.870: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 37 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:11:52.215: INFO: >>> kubeConfig: /root/.kube/config
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":2,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 252 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":2,"skipped":5,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":3,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:13:34.452: INFO: >>> kubeConfig: /root/.kube/config
... skipping 78 lines ...
Jul  6 06:13:35.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  6 06:13:35.857: INFO: Waiting up to 5m0s for pod "pod-d3fdf580-791c-4d51-92ce-32b37ea554c2" in namespace "emptydir-9146" to be "Succeeded or Failed"
Jul  6 06:13:35.954: INFO: Pod "pod-d3fdf580-791c-4d51-92ce-32b37ea554c2": Phase="Pending", Reason="", readiness=false. Elapsed: 96.476935ms
Jul  6 06:13:38.050: INFO: Pod "pod-d3fdf580-791c-4d51-92ce-32b37ea554c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.193080262s
STEP: Saw pod success
Jul  6 06:13:38.050: INFO: Pod "pod-d3fdf580-791c-4d51-92ce-32b37ea554c2" satisfied condition "Succeeded or Failed"
Jul  6 06:13:38.147: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-d3fdf580-791c-4d51-92ce-32b37ea554c2 container test-container: <nil>
STEP: delete the pod
Jul  6 06:13:38.344: INFO: Waiting for pod pod-d3fdf580-791c-4d51-92ce-32b37ea554c2 to disappear
Jul  6 06:13:38.440: INFO: Pod pod-d3fdf580-791c-4d51-92ce-32b37ea554c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:38.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9146" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:38.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7758" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:38.866: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 35 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:39.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1067" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:13:39.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192
Jul  6 06:13:40.311: INFO: found topology map[topology.kubernetes.io/zone:eu-west-2a]
Jul  6 06:13:40.311: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul  6 06:13:40.311: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
STEP: Deleting sc
... skipping 7 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Not enough topologies in cluster -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:199
------------------------------
... skipping 34 lines ...
• [SLOW TEST:15.727 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":6,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:41.372: INFO: Only supported for providers [openstack] (not aws)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 43 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:41.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9966" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":6,"skipped":26,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] health handlers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:42.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-6099" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":7,"skipped":88,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Destroying namespace "apply-3677" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for subresources","total":-1,"completed":7,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 53 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:45.702: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 62 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":7,"skipped":71,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:47.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8124" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 367 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:13:58.689: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 138 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:13:59.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4776" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":6,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":9,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:01.088: INFO: Only supported for providers [openstack] (not aws)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":1,"skipped":11,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:13:52.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:993
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:06.577: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 36 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7116-crds.webhook.example.com via the AdmissionRegistration API
Jul  6 06:13:17.503: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:13:27.800: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:13:38.001: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:13:48.207: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:13:58.408: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:13:58.408: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 425 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:13:58.408: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":3,"skipped":59,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:07.756: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:106
STEP: Creating a pod to test downward API volume plugin
Jul  6 06:14:07.212: INFO: Waiting up to 5m0s for pod "metadata-volume-13d03c75-99c5-4239-9ef9-4a1e8166626f" in namespace "projected-5051" to be "Succeeded or Failed"
Jul  6 06:14:07.308: INFO: Pod "metadata-volume-13d03c75-99c5-4239-9ef9-4a1e8166626f": Phase="Pending", Reason="", readiness=false. Elapsed: 95.970107ms
Jul  6 06:14:09.409: INFO: Pod "metadata-volume-13d03c75-99c5-4239-9ef9-4a1e8166626f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.196816689s
STEP: Saw pod success
Jul  6 06:14:09.409: INFO: Pod "metadata-volume-13d03c75-99c5-4239-9ef9-4a1e8166626f" satisfied condition "Succeeded or Failed"
Jul  6 06:14:09.504: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod metadata-volume-13d03c75-99c5-4239-9ef9-4a1e8166626f container client-container: <nil>
STEP: delete the pod
Jul  6 06:14:09.714: INFO: Waiting for pod metadata-volume-13d03c75-99c5-4239-9ef9-4a1e8166626f to disappear
Jul  6 06:14:09.810: INFO: Pod metadata-volume-13d03c75-99c5-4239-9ef9-4a1e8166626f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:14:09.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5051" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
... skipping 60 lines ...
Jul  6 06:13:10.659: INFO: PersistentVolumeClaim csi-hostpath7pxst found but phase is Pending instead of Bound.
Jul  6 06:13:12.758: INFO: PersistentVolumeClaim csi-hostpath7pxst found but phase is Pending instead of Bound.
Jul  6 06:13:14.855: INFO: PersistentVolumeClaim csi-hostpath7pxst found but phase is Pending instead of Bound.
Jul  6 06:13:16.954: INFO: PersistentVolumeClaim csi-hostpath7pxst found and phase=Bound (6.392792298s)
STEP: Creating pod pod-subpath-test-dynamicpv-9ssz
STEP: Creating a pod to test subpath
Jul  6 06:13:17.250: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9ssz" in namespace "provisioning-4881" to be "Succeeded or Failed"
Jul  6 06:13:17.347: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 97.446968ms
Jul  6 06:13:19.445: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195735418s
Jul  6 06:13:21.544: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293935546s
Jul  6 06:13:23.642: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391983327s
Jul  6 06:13:25.740: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489969755s
Jul  6 06:13:27.838: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.588841144s
Jul  6 06:13:29.936: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.686779012s
Jul  6 06:13:32.034: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.784830133s
Jul  6 06:13:34.135: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 16.885268477s
Jul  6 06:13:36.264: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 19.014582638s
Jul  6 06:13:38.368: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.11796237s
STEP: Saw pod success
Jul  6 06:13:38.368: INFO: Pod "pod-subpath-test-dynamicpv-9ssz" satisfied condition "Succeeded or Failed"
Jul  6 06:13:38.473: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-9ssz container test-container-subpath-dynamicpv-9ssz: <nil>
STEP: delete the pod
Jul  6 06:13:38.680: INFO: Waiting for pod pod-subpath-test-dynamicpv-9ssz to disappear
Jul  6 06:13:38.777: INFO: Pod pod-subpath-test-dynamicpv-9ssz no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9ssz
Jul  6 06:13:38.777: INFO: Deleting pod "pod-subpath-test-dynamicpv-9ssz" in namespace "provisioning-4881"
STEP: Creating pod pod-subpath-test-dynamicpv-9ssz
STEP: Creating a pod to test subpath
Jul  6 06:13:38.981: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9ssz" in namespace "provisioning-4881" to be "Succeeded or Failed"
Jul  6 06:13:39.084: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 102.420926ms
Jul  6 06:13:41.181: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199951893s
Jul  6 06:13:43.280: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298138164s
Jul  6 06:13:45.378: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396076769s
Jul  6 06:13:47.476: INFO: Pod "pod-subpath-test-dynamicpv-9ssz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.494374571s
STEP: Saw pod success
Jul  6 06:13:47.476: INFO: Pod "pod-subpath-test-dynamicpv-9ssz" satisfied condition "Succeeded or Failed"
Jul  6 06:13:47.573: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-9ssz container test-container-subpath-dynamicpv-9ssz: <nil>
STEP: delete the pod
Jul  6 06:13:47.781: INFO: Waiting for pod pod-subpath-test-dynamicpv-9ssz to disappear
Jul  6 06:13:47.881: INFO: Pod pod-subpath-test-dynamicpv-9ssz no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9ssz
Jul  6 06:13:47.881: INFO: Deleting pod "pod-subpath-test-dynamicpv-9ssz" in namespace "provisioning-4881"
... skipping 209 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":3,"skipped":21,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:12.370: INFO: Only supported for providers [vsphere] (not aws)
... skipping 90 lines ...
Jul  6 06:14:02.164: INFO: PersistentVolumeClaim pvc-q86zs found but phase is Pending instead of Bound.
Jul  6 06:14:04.261: INFO: PersistentVolumeClaim pvc-q86zs found and phase=Bound (14.77595336s)
Jul  6 06:14:04.261: INFO: Waiting up to 3m0s for PersistentVolume local-ff4b8 to have phase Bound
Jul  6 06:14:04.357: INFO: PersistentVolume local-ff4b8 found and phase=Bound (96.239734ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-mkb8
STEP: Creating a pod to test subpath
Jul  6 06:14:04.648: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mkb8" in namespace "provisioning-1335" to be "Succeeded or Failed"
Jul  6 06:14:04.746: INFO: Pod "pod-subpath-test-preprovisionedpv-mkb8": Phase="Pending", Reason="", readiness=false. Elapsed: 97.457412ms
Jul  6 06:14:06.851: INFO: Pod "pod-subpath-test-preprovisionedpv-mkb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20287953s
Jul  6 06:14:08.949: INFO: Pod "pod-subpath-test-preprovisionedpv-mkb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.300798963s
STEP: Saw pod success
Jul  6 06:14:08.949: INFO: Pod "pod-subpath-test-preprovisionedpv-mkb8" satisfied condition "Succeeded or Failed"
Jul  6 06:14:09.046: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-mkb8 container test-container-subpath-preprovisionedpv-mkb8: <nil>
STEP: delete the pod
Jul  6 06:14:09.247: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mkb8 to disappear
Jul  6 06:14:09.343: INFO: Pod pod-subpath-test-preprovisionedpv-mkb8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mkb8
Jul  6 06:14:09.343: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mkb8" in namespace "provisioning-1335"
STEP: Creating pod pod-subpath-test-preprovisionedpv-mkb8
STEP: Creating a pod to test subpath
Jul  6 06:14:09.540: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-mkb8" in namespace "provisioning-1335" to be "Succeeded or Failed"
Jul  6 06:14:09.636: INFO: Pod "pod-subpath-test-preprovisionedpv-mkb8": Phase="Pending", Reason="", readiness=false. Elapsed: 96.628525ms
Jul  6 06:14:11.734: INFO: Pod "pod-subpath-test-preprovisionedpv-mkb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194387281s
STEP: Saw pod success
Jul  6 06:14:11.734: INFO: Pod "pod-subpath-test-preprovisionedpv-mkb8" satisfied condition "Succeeded or Failed"
Jul  6 06:14:11.831: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-mkb8 container test-container-subpath-preprovisionedpv-mkb8: <nil>
STEP: delete the pod
Jul  6 06:14:12.031: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-mkb8 to disappear
Jul  6 06:14:12.127: INFO: Pod pod-subpath-test-preprovisionedpv-mkb8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-mkb8
Jul  6 06:14:12.127: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-mkb8" in namespace "provisioning-1335"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":5,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":5,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:14:10.797: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
Jul  6 06:14:16.728: INFO: PersistentVolumeClaim pvc-c2vv2 found but phase is Pending instead of Bound.
Jul  6 06:14:18.826: INFO: PersistentVolumeClaim pvc-c2vv2 found and phase=Bound (4.292557648s)
Jul  6 06:14:18.826: INFO: Waiting up to 3m0s for PersistentVolume local-jnq7m to have phase Bound
Jul  6 06:14:18.924: INFO: PersistentVolume local-jnq7m found and phase=Bound (98.305621ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-j7jw
STEP: Creating a pod to test subpath
Jul  6 06:14:19.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-j7jw" in namespace "provisioning-7544" to be "Succeeded or Failed"
Jul  6 06:14:19.317: INFO: Pod "pod-subpath-test-preprovisionedpv-j7jw": Phase="Pending", Reason="", readiness=false. Elapsed: 97.517371ms
Jul  6 06:14:21.416: INFO: Pod "pod-subpath-test-preprovisionedpv-j7jw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195590094s
Jul  6 06:14:23.515: INFO: Pod "pod-subpath-test-preprovisionedpv-j7jw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.294604108s
STEP: Saw pod success
Jul  6 06:14:23.515: INFO: Pod "pod-subpath-test-preprovisionedpv-j7jw" satisfied condition "Succeeded or Failed"
Jul  6 06:14:23.612: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-j7jw container test-container-volume-preprovisionedpv-j7jw: <nil>
STEP: delete the pod
Jul  6 06:14:23.816: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-j7jw to disappear
Jul  6 06:14:23.913: INFO: Pod pod-subpath-test-preprovisionedpv-j7jw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-j7jw
Jul  6 06:14:23.913: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-j7jw" in namespace "provisioning-7544"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:25.299: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":3,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:13:49.243: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Jul  6 06:14:01.570: INFO: PersistentVolumeClaim pvc-mrsvd found but phase is Pending instead of Bound.
Jul  6 06:14:03.672: INFO: PersistentVolumeClaim pvc-mrsvd found and phase=Bound (10.651207538s)
Jul  6 06:14:03.672: INFO: Waiting up to 3m0s for PersistentVolume local-hvqjk to have phase Bound
Jul  6 06:14:03.772: INFO: PersistentVolume local-hvqjk found and phase=Bound (99.653263ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-c9zr
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 06:14:04.075: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-c9zr" in namespace "provisioning-4885" to be "Succeeded or Failed"
Jul  6 06:14:04.172: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Pending", Reason="", readiness=false. Elapsed: 96.98883ms
Jul  6 06:14:06.270: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195522036s
Jul  6 06:14:08.368: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 4.293015577s
Jul  6 06:14:10.466: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 6.391604059s
Jul  6 06:14:12.565: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 8.489888309s
Jul  6 06:14:14.662: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 10.587601774s
Jul  6 06:14:16.761: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 12.685684619s
Jul  6 06:14:18.860: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 14.784731806s
Jul  6 06:14:20.958: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 16.882956029s
Jul  6 06:14:23.057: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 18.981661531s
Jul  6 06:14:25.155: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Running", Reason="", readiness=true. Elapsed: 21.080278113s
Jul  6 06:14:27.253: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.178581944s
STEP: Saw pod success
Jul  6 06:14:27.254: INFO: Pod "pod-subpath-test-preprovisionedpv-c9zr" satisfied condition "Succeeded or Failed"
Jul  6 06:14:27.351: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-c9zr container test-container-subpath-preprovisionedpv-c9zr: <nil>
STEP: delete the pod
Jul  6 06:14:27.551: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-c9zr to disappear
Jul  6 06:14:27.648: INFO: Pod pod-subpath-test-preprovisionedpv-c9zr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-c9zr
Jul  6 06:14:27.648: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-c9zr" in namespace "provisioning-4885"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  6 06:14:08.262: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 06:14:08.359: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-8r8b
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 06:14:08.459: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-8r8b" in namespace "provisioning-9276" to be "Succeeded or Failed"
Jul  6 06:14:08.555: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Pending", Reason="", readiness=false. Elapsed: 96.585504ms
Jul  6 06:14:10.653: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194072237s
Jul  6 06:14:12.750: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 4.290883862s
Jul  6 06:14:14.848: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 6.389226834s
Jul  6 06:14:16.945: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 8.486417402s
Jul  6 06:14:19.044: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 10.585217018s
Jul  6 06:14:21.141: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 12.682176016s
Jul  6 06:14:23.239: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 14.780465024s
Jul  6 06:14:25.337: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 16.878549442s
Jul  6 06:14:27.435: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 18.975875828s
Jul  6 06:14:29.532: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Running", Reason="", readiness=true. Elapsed: 21.073406954s
Jul  6 06:14:31.629: INFO: Pod "pod-subpath-test-inlinevolume-8r8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.170626462s
STEP: Saw pod success
Jul  6 06:14:31.630: INFO: Pod "pod-subpath-test-inlinevolume-8r8b" satisfied condition "Succeeded or Failed"
Jul  6 06:14:31.726: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-8r8b container test-container-subpath-inlinevolume-8r8b: <nil>
STEP: delete the pod
Jul  6 06:14:31.926: INFO: Waiting for pod pod-subpath-test-inlinevolume-8r8b to disappear
Jul  6 06:14:32.023: INFO: Pod pod-subpath-test-inlinevolume-8r8b no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-8r8b
Jul  6 06:14:32.023: INFO: Deleting pod "pod-subpath-test-inlinevolume-8r8b" in namespace "provisioning-9276"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":4,"skipped":67,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:32.430: INFO: Only supported for providers [openstack] (not aws)
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:14:33.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6639" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":5,"skipped":76,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:33.629: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 125 lines ...
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8147 to expose endpoints map[pod1:[100] pod2:[101]]
Jul  6 06:12:10.489: INFO: successfully validated that service multi-endpoint-test in namespace services-8147 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Jul  6 06:12:10.489: INFO: Creating new exec pod
Jul  6 06:12:19.780: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8147 exec execpodpj5pr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  6 06:12:25.831: INFO: rc: 1
Jul  6 06:12:25.831: INFO: Service reachability failing with error: error running /tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8147 exec execpodpj5pr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 294 lines of near-identical retries between 06:12:26 and 06:14:31: the same kubectl exec | nc probe is re-run every few seconds, each attempt exiting 1 with either "nc: getaddrinfo: Try again" or "nc: connect to multi-endpoint-test port 80 (tcp) timed out: Operation in progress" (a few attempts show the two sh -x trace lines interleaved) ...
Jul  6 06:14:31.860: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8147 exec execpodpj5pr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jul  6 06:14:37.884: INFO: rc: 1
Jul  6 06:14:37.884: INFO: Service reachability failing with error: error running /tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8147 exec execpodpj5pr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 06:14:37.884: FAIL: Unexpected error:
    <*errors.errorString | 0xc0038b4300>: {
        s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol
occurred
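
The probe above runs "echo hostName | nc -v -t -w 2 multi-endpoint-test 80" inside the exec pod, so the two error shapes in the retries mean different things: "nc: getaddrinfo: Try again" means the service name never resolved through cluster DNS, while "nc: connect ... timed out: Operation in progress" means the name resolved but no endpoint accepted the TCP connection within nc's 2-second -w budget. A minimal Go sketch of an equivalent reachability wait follows; the endpoint and the 2m0s budget are taken from the log, everything else (names, intervals) is illustrative rather than the e2e framework's actual code.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitReachable dials addr over TCP until a connect succeeds or the budget
// expires, mirroring the retry loop visible in the log above.
func waitReachable(addr string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second) // like nc -w 2
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial %s failed: %v; Retrying...\n", addr, err)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", budget, addr)
}

func main() {
	if err := waitReachable("multi-endpoint-test:80", 2*time.Minute); err != nil {
		fmt.Println("FAIL:", err)
	}
}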

... skipping 21 lines ...
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:12:08 +0000 UTC - event for pod2: {kubelet ip-172-20-59-118.eu-west-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:12:08 +0000 UTC - event for pod2: {kubelet ip-172-20-59-118.eu-west-2.compute.internal} Started: Started container agnhost-container
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:12:10 +0000 UTC - event for execpodpj5pr: {default-scheduler } Scheduled: Successfully assigned services-8147/execpodpj5pr to ip-172-20-56-54.eu-west-2.compute.internal
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:12:12 +0000 UTC - event for execpodpj5pr: {kubelet ip-172-20-56-54.eu-west-2.compute.internal} Started: Started container agnhost-container
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:12:12 +0000 UTC - event for execpodpj5pr: {kubelet ip-172-20-56-54.eu-west-2.compute.internal} Created: Created container agnhost-container
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:12:12 +0000 UTC - event for execpodpj5pr: {kubelet ip-172-20-56-54.eu-west-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:14:38 +0000 UTC - event for multi-endpoint-test: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint services-8147/multi-endpoint-test: Operation cannot be fulfilled on endpoints "multi-endpoint-test": the object has been modified; please apply your changes to the latest version and try again
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:14:38 +0000 UTC - event for pod1: {kubelet ip-172-20-36-135.eu-west-2.compute.internal} Killing: Stopping container agnhost-container
Jul  6 06:14:38.540: INFO: At 2021-07-06 06:14:38 +0000 UTC - event for pod2: {kubelet ip-172-20-59-118.eu-west-2.compute.internal} Killing: Stopping container agnhost-container
Jul  6 06:14:38.635: INFO: POD           NODE                                        PHASE    GRACE  CONDITIONS
Jul  6 06:14:38.635: INFO: execpodpj5pr  ip-172-20-56-54.eu-west-2.compute.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:12:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:12:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:12:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:12:10 +0000 UTC  }]
Jul  6 06:14:38.636: INFO: 
Jul  6 06:14:38.828: INFO: 
... skipping 184 lines ...
• Failure [158.461 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:14:37.884: Unexpected error:
      <*errors.errorString | 0xc0038b4300>: {
          s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:910
------------------------------
{"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":1,"skipped":5,"failed":1,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:42.474: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
• [SLOW TEST:77.833 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:14:47.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1398" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should validate Statefulset Status endpoints
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:957
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints","total":-1,"completed":5,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:52.685: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6539-crds.webhook.example.com via the AdmissionRegistration API
Jul  6 06:14:04.360: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:14:14.657: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:14:24.859: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:14:35.057: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:14:45.255: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:14:45.255: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 424 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:14:45.255: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":4,"skipped":23,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:53.543: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 194 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:14:55.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5079" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":5,"skipped":49,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:56.045: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 21 lines ...
Jul  6 06:14:52.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  6 06:14:53.284: INFO: Waiting up to 5m0s for pod "pod-65ce29ab-70ed-4c6f-ba03-5bb8dea74d42" in namespace "emptydir-4584" to be "Succeeded or Failed"
Jul  6 06:14:53.381: INFO: Pod "pod-65ce29ab-70ed-4c6f-ba03-5bb8dea74d42": Phase="Pending", Reason="", readiness=false. Elapsed: 97.019978ms
Jul  6 06:14:55.480: INFO: Pod "pod-65ce29ab-70ed-4c6f-ba03-5bb8dea74d42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195677288s
STEP: Saw pod success
Jul  6 06:14:55.480: INFO: Pod "pod-65ce29ab-70ed-4c6f-ba03-5bb8dea74d42" satisfied condition "Succeeded or Failed"
Jul  6 06:14:55.577: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-65ce29ab-70ed-4c6f-ba03-5bb8dea74d42 container test-container: <nil>
STEP: delete the pod
Jul  6 06:14:55.778: INFO: Waiting for pod pod-65ce29ab-70ed-4c6f-ba03-5bb8dea74d42 to disappear
Jul  6 06:14:55.876: INFO: Pod pod-65ce29ab-70ed-4c6f-ba03-5bb8dea74d42 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:14:55.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4584" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":6,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:14:57.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1777" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":6,"skipped":52,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:14:57.352: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support subPath [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:93
STEP: Creating a pod to test hostPath subPath
Jul  6 06:14:56.718: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-497" to be "Succeeded or Failed"
Jul  6 06:14:56.815: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 97.209796ms
Jul  6 06:14:58.914: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195462372s
STEP: Saw pod success
Jul  6 06:14:58.914: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  6 06:14:59.011: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jul  6 06:14:59.214: INFO: Waiting for pod pod-host-path-test to disappear
Jul  6 06:14:59.311: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:14:59.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-497" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support subPath [NodeConformance]","total":-1,"completed":7,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:14:57.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-6266/configmap-test-dca221b4-0117-4e43-85fa-87b472b51ff8
STEP: Creating a pod to test consume configMaps
Jul  6 06:14:58.062: INFO: Waiting up to 5m0s for pod "pod-configmaps-2191313f-acac-4aef-9da4-2411c045fc2e" in namespace "configmap-6266" to be "Succeeded or Failed"
Jul  6 06:14:58.160: INFO: Pod "pod-configmaps-2191313f-acac-4aef-9da4-2411c045fc2e": Phase="Pending", Reason="", readiness=false. Elapsed: 97.68931ms
Jul  6 06:15:00.258: INFO: Pod "pod-configmaps-2191313f-acac-4aef-9da4-2411c045fc2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195423455s
Jul  6 06:15:02.356: INFO: Pod "pod-configmaps-2191313f-acac-4aef-9da4-2411c045fc2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293580016s
STEP: Saw pod success
Jul  6 06:15:02.356: INFO: Pod "pod-configmaps-2191313f-acac-4aef-9da4-2411c045fc2e" satisfied condition "Succeeded or Failed"
Jul  6 06:15:02.454: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-configmaps-2191313f-acac-4aef-9da4-2411c045fc2e container env-test: <nil>
STEP: delete the pod
Jul  6 06:15:02.655: INFO: Waiting for pod pod-configmaps-2191313f-acac-4aef-9da4-2411c045fc2e to disappear
Jul  6 06:15:02.753: INFO: Pod pod-configmaps-2191313f-acac-4aef-9da4-2411c045fc2e no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.575 seconds]
[sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":58,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:15:02.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jul  6 06:15:03.555: INFO: Waiting up to 5m0s for pod "security-context-830d5eea-958b-4ac7-ab51-b7e92c87d0ef" in namespace "security-context-133" to be "Succeeded or Failed"
Jul  6 06:15:03.652: INFO: Pod "security-context-830d5eea-958b-4ac7-ab51-b7e92c87d0ef": Phase="Pending", Reason="", readiness=false. Elapsed: 97.279844ms
Jul  6 06:15:05.750: INFO: Pod "security-context-830d5eea-958b-4ac7-ab51-b7e92c87d0ef": Phase="Running", Reason="", readiness=true. Elapsed: 2.195793172s
Jul  6 06:15:07.851: INFO: Pod "security-context-830d5eea-958b-4ac7-ab51-b7e92c87d0ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.296016657s
STEP: Saw pod success
Jul  6 06:15:07.851: INFO: Pod "security-context-830d5eea-958b-4ac7-ab51-b7e92c87d0ef" satisfied condition "Succeeded or Failed"
Jul  6 06:15:07.948: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod security-context-830d5eea-958b-4ac7-ab51-b7e92c87d0ef container test-container: <nil>
STEP: delete the pod
Jul  6 06:15:08.165: INFO: Waiting for pod security-context-830d5eea-958b-4ac7-ab51-b7e92c87d0ef to disappear
Jul  6 06:15:08.262: INFO: Pod security-context-830d5eea-958b-4ac7-ab51-b7e92c87d0ef no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.493 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":8,"skipped":59,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Jul  6 06:15:09.070: INFO: Waiting up to 5m0s for pod "metadata-volume-96fe1581-76bc-448b-9663-66745b80b74a" in namespace "projected-7307" to be "Succeeded or Failed"
Jul  6 06:15:09.168: INFO: Pod "metadata-volume-96fe1581-76bc-448b-9663-66745b80b74a": Phase="Pending", Reason="", readiness=false. Elapsed: 97.814544ms
Jul  6 06:15:11.266: INFO: Pod "metadata-volume-96fe1581-76bc-448b-9663-66745b80b74a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.196260859s
STEP: Saw pod success
Jul  6 06:15:11.266: INFO: Pod "metadata-volume-96fe1581-76bc-448b-9663-66745b80b74a" satisfied condition "Succeeded or Failed"
Jul  6 06:15:11.364: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod metadata-volume-96fe1581-76bc-448b-9663-66745b80b74a container client-container: <nil>
STEP: delete the pod
Jul  6 06:15:11.571: INFO: Waiting for pod metadata-volume-96fe1581-76bc-448b-9663-66745b80b74a to disappear
Jul  6 06:15:11.668: INFO: Pod metadata-volume-96fe1581-76bc-448b-9663-66745b80b74a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:15:11.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7307" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":9,"skipped":62,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:15:11.874: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 135 lines ...
• [SLOW TEST:63.086 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:15:13.134: INFO: Only supported for providers [gce gke] (not aws)
... skipping 36 lines ...
STEP: Destroying namespace "services-6993" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":5,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:15:14.366: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
Jul  6 06:14:48.747: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-q2svw] to have phase Bound
Jul  6 06:14:48.843: INFO: PersistentVolumeClaim pvc-q2svw found and phase=Bound (96.079012ms)
STEP: Deleting the previously created pod
Jul  6 06:14:53.323: INFO: Deleting pod "pvc-volume-tester-vpkbt" in namespace "csi-mock-volumes-9692"
Jul  6 06:14:53.421: INFO: Wait up to 5m0s for pod "pvc-volume-tester-vpkbt" to be fully deleted
STEP: Checking CSI driver logs
Jul  6 06:14:59.711: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/0bba420b-a3ee-4bbb-a4ff-5384fb532758/volumes/kubernetes.io~csi/pvc-461caac1-14d5-461c-88bb-3d7b0792cd9a/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-vpkbt
Jul  6 06:14:59.711: INFO: Deleting pod "pvc-volume-tester-vpkbt" in namespace "csi-mock-volumes-9692"
STEP: Deleting claim pvc-q2svw
Jul  6 06:14:59.999: INFO: Waiting up to 2m0s for PersistentVolume pvc-461caac1-14d5-461c-88bb-3d7b0792cd9a to get deleted
Jul  6 06:15:00.095: INFO: PersistentVolume pvc-461caac1-14d5-461c-88bb-3d7b0792cd9a was removed
STEP: Deleting storageclass csi-mock-volumes-9692-scs6snz
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497
    token should not be plumbed down when CSIDriver is not deployed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed","total":-1,"completed":6,"skipped":85,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:15:25.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4849" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":7,"skipped":86,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:15:25.873: INFO: Driver aws doesn't support ext3 -- skipping
... skipping 37 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":29,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:12:59.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 5 lines ...
STEP: creating replication controller nodeport-test in namespace services-6731
I0706 06:12:59.959586   12527 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6731, replica count: 2
I0706 06:13:03.110499   12527 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  6 06:13:03.110: INFO: Creating new exec pod
Jul  6 06:13:06.501: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  6 06:13:12.564: INFO: rc: 1
Jul  6 06:13:12.564: INFO: Service reachability failing with error: error running /tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 210 lines of near-identical retries between 06:13:13 and 06:14:57: the same kubectl exec | nc probe against nodeport-test is re-run every few seconds, each attempt exiting 1 with "nc: getaddrinfo: Try again" (a few attempts show the two sh -x trace lines interleaved) ...
Jul  6 06:14:58.564: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  6 06:15:04.606: INFO: rc: 1
Jul  6 06:15:04.606: INFO: Service reachability failing with error: error running /tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 06:15:05.565: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  6 06:15:11.590: INFO: rc: 1
Jul  6 06:15:11.590: INFO: Service reachability failing with error: error running /tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 06:15:12.564: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  6 06:15:18.604: INFO: rc: 1
Jul  6 06:15:18.604: INFO: Service reachability failing with error: error running /tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 06:15:18.604: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jul  6 06:15:24.660: INFO: rc: 1
Jul  6 06:15:24.660: INFO: Service reachability failing with error: error running /tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6731 exec execpodptgmn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  6 06:15:24.660: FAIL: Unexpected error:
    <*errors.errorString | 0xc000311fa0>: {
        s: "service is not reachable within 2m0s timeout on endpoint nodeport-test:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint nodeport-test:80 over TCP protocol
occurred

... skipping 231 lines ...
• Failure [149.808 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:15:24.660: Unexpected error:
      <*errors.errorString | 0xc000311fa0>: {
          s: "service is not reachable within 2m0s timeout on endpoint nodeport-test:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint nodeport-test:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1187
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":2,"skipped":29,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:18.064 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":8,"skipped":95,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:15:43.972: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 93 lines ...
Jul  6 06:15:45.677: INFO: AfterEach: Cleaning up test resources.
Jul  6 06:15:45.677: INFO: Deleting PersistentVolumeClaim "pvc-zb76s"
Jul  6 06:15:45.776: INFO: Deleting PersistentVolume "hostpath-8qjnr"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":9,"skipped":101,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:15:45.888: INFO: Only supported for providers [gce gke] (not aws)
... skipping 154 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":8,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:15:47.552: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 158 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 06:15:48.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ff7fec4-e594-43a5-94bf-d901a8c4db29" in namespace "downward-api-6401" to be "Succeeded or Failed"
Jul  6 06:15:48.244: INFO: Pod "downwardapi-volume-1ff7fec4-e594-43a5-94bf-d901a8c4db29": Phase="Pending", Reason="", readiness=false. Elapsed: 95.356935ms
Jul  6 06:15:50.340: INFO: Pod "downwardapi-volume-1ff7fec4-e594-43a5-94bf-d901a8c4db29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.191526779s
STEP: Saw pod success
Jul  6 06:15:50.340: INFO: Pod "downwardapi-volume-1ff7fec4-e594-43a5-94bf-d901a8c4db29" satisfied condition "Succeeded or Failed"
Jul  6 06:15:50.436: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod downwardapi-volume-1ff7fec4-e594-43a5-94bf-d901a8c4db29 container client-container: <nil>
STEP: delete the pod
Jul  6 06:15:50.638: INFO: Waiting for pod downwardapi-volume-1ff7fec4-e594-43a5-94bf-d901a8c4db29 to disappear
Jul  6 06:15:50.736: INFO: Pod downwardapi-volume-1ff7fec4-e594-43a5-94bf-d901a8c4db29 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 24 lines ...
• [SLOW TEST:8.106 seconds]
[sig-api-machinery] ServerSideApply
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should work for CRDs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:569
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should work for CRDs","total":-1,"completed":10,"skipped":111,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:15:54.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  6 06:15:54.679: INFO: Waiting up to 5m0s for pod "pod-61e06422-8234-4753-9e57-c2a382e0c0d1" in namespace "emptydir-2261" to be "Succeeded or Failed"
Jul  6 06:15:54.776: INFO: Pod "pod-61e06422-8234-4753-9e57-c2a382e0c0d1": Phase="Pending", Reason="", readiness=false. Elapsed: 97.422272ms
Jul  6 06:15:56.875: INFO: Pod "pod-61e06422-8234-4753-9e57-c2a382e0c0d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195857836s
STEP: Saw pod success
Jul  6 06:15:56.875: INFO: Pod "pod-61e06422-8234-4753-9e57-c2a382e0c0d1" satisfied condition "Succeeded or Failed"
Jul  6 06:15:56.973: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-61e06422-8234-4753-9e57-c2a382e0c0d1 container test-container: <nil>
STEP: delete the pod
Jul  6 06:15:57.176: INFO: Waiting for pod pod-61e06422-8234-4753-9e57-c2a382e0c0d1 to disappear
Jul  6 06:15:57.273: INFO: Pod pod-61e06422-8234-4753-9e57-c2a382e0c0d1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:15:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2261" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":124,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:15:57.496: INFO: Only supported for providers [gce gke] (not aws)
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":12,"skipped":130,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:16:12.202: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 130 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should not modify fsGroup if fsGroupPolicy=None
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None","total":-1,"completed":8,"skipped":20,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":4,"skipped":45,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:15:50.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":5,"skipped":45,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:16:12.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:233
STEP: Looking for a node to schedule job pod
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:16:21.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-6622" for this suite.


• [SLOW TEST:8.987 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are not locally restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:233
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:16:16.919: INFO: >>> kubeConfig: /root/.kube/config
... skipping 18 lines ...
• [SLOW TEST:18.478 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":9,"skipped":21,"failed":0}

SSSSSSS
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted","total":-1,"completed":13,"skipped":135,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:16:21.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:14.905 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":14,"skipped":135,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:16:36.144: INFO: Only supported for providers [vsphere] (not aws)
... skipping 131 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:16:39.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-2501" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":15,"skipped":142,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:16:39.407: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:16:40.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2616" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":16,"skipped":153,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:16:40.527: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":10,"skipped":28,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:64.081 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:377
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":7,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 13 lines ...
STEP: creating a claim
Jul  6 06:11:51.128: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: starting aws-injector
STEP: Deleting pod aws-injector in namespace volume-8522
Jul  6 06:16:51.858: INFO: Waiting for pod aws-injector to disappear
Jul  6 06:16:51.955: INFO: Pod aws-injector no longer exists
Jul  6 06:16:51.956: FAIL: Failed to create injector pod: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186 +0x3ff
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000583b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-8522".
STEP: Found 6 events.
Jul  6 06:16:52.349: INFO: At 2021-07-06 06:11:51 +0000 UTC - event for awsvxtl5: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  6 06:16:52.349: INFO: At 2021-07-06 06:11:51 +0000 UTC - event for awsvxtl5: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  6 06:16:52.349: INFO: At 2021-07-06 06:11:51 +0000 UTC - event for awsvxtl5: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } Provisioning: External provisioner is provisioning volume for claim "volume-8522/awsvxtl5"
Jul  6 06:16:52.349: INFO: At 2021-07-06 06:12:01 +0000 UTC - event for awsvxtl5: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } ProvisioningFailed: failed to provision volume with StorageClass "volume-8522sxf9m": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  6 06:16:52.349: INFO: At 2021-07-06 06:12:12 +0000 UTC - event for awsvxtl5: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } ProvisioningFailed: failed to provision volume with StorageClass "volume-8522sxf9m": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  6 06:16:52.349: INFO: At 2021-07-06 06:15:18 +0000 UTC - event for awsvxtl5: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } ProvisioningFailed: failed to provision volume with StorageClass "volume-8522sxf9m": rpc error: code = Internal desc = Could not create volume "pvc-98c1dd97-7797-41b2-8927-dbe84ec3e146": failed to get an available volume in EC2: RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  6 06:16:52.447: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  6 06:16:52.447: INFO: 
Jul  6 06:16:52.641: INFO: 
Logging node info for node ip-172-20-32-57.eu-west-2.compute.internal
Jul  6 06:16:52.739: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-32-57.eu-west-2.compute.internal    69f850ad-0e3c-45b2-8481-0c592e1b2544 9169 0 2021-07-06 06:08:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kops.k8s.io/instancegroup:nodes-eu-west-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-32-57.eu-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:ip-172-20-32-57.eu-west-2.compute.internal topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-04b2469cf8d928a72"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-06 06:08:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-06 06:08:56 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-06 06:08:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-06 06:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2021-07-06 06:14:54 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2021-07-06 06:15:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-04b2469cf8d928a72,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-06 06:16:46 +0000 UTC,LastTransitionTime:2021-07-06 06:08:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-06 06:16:46 +0000 UTC,LastTransitionTime:2021-07-06 06:08:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-06 06:16:46 +0000 UTC,LastTransitionTime:2021-07-06 06:08:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-06 06:16:46 +0000 UTC,LastTransitionTime:2021-07-06 06:09:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.32.57,},NodeAddress{Type:ExternalIP,Address:35.176.18.223,},NodeAddress{Type:InternalDNS,Address:ip-172-20-32-57.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-32-57.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-35-176-18-223.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec224b3f3013408803992ba3241c2065,SystemUUID:ec224b3f-3013-4088-0399-2ba3241c2065,BootID:78da66d3-86e3-4ca8-a949-169540ab78f8,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db docker.io/library/nginx:latest],SizeBytes:53740695,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-00466c4f634295ffd kubernetes.io/csi/ebs.csi.aws.com^vol-0fd5e1e7e47f2a1ab],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-00466c4f634295ffd,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-014bcceea6af4be4d,DevicePath:,},},Config:nil,},}
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Jul  6 06:16:51.956: Failed to create injector pod: timed out waiting for the condition

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":0,"skipped":0,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data"]}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 98 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should not expand volume if resizingOnDriver=off, resizingOnSC=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on","total":-1,"completed":7,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:16:56.744: INFO: Driver local doesn't support ext4 -- skipping
... skipping 54 lines ...
Jul  6 06:11:52.571: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  6 06:11:52.571: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  6 06:11:52.571: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-2607vgc8h      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-2607    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-2607vgc8h,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-2607    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-2607vgc8h,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-2607    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-2607vgc8h,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-2607vgc8h    2d26d245-2c89-4800-ba5c-ff7edd084f00 1794 0 2021-07-06 06:11:52 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-07-06 06:11:52 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-6q2hl pvc- provisioning-2607  3f57d92c-2632-4f75-a33d-71c0255e9e52 1816 0 2021-07-06 06:11:53 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-07-06 06:11:53 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-2607vgc8h,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
Jul  6 06:16:53.451: FAIL: Unexpected error:
    <*errors.errorString | 0xc003c2ad70>: {
        s: "pod \"pod-f26aa396-c151-458c-8481-0151727657f7\" is not Running: timed out waiting for the condition",
    }
    pod "pod-f26aa396-c151-458c-8481-0151727657f7" is not Running: timed out waiting for the condition
occurred

... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-2607".
STEP: Found 8 events.
Jul  6 06:16:53.746: INFO: At 2021-07-06 06:11:53 +0000 UTC - event for pvc-6q2hl: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  6 06:16:53.746: INFO: At 2021-07-06 06:11:53 +0000 UTC - event for pvc-6q2hl: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } Provisioning: External provisioner is provisioning volume for claim "provisioning-2607/pvc-6q2hl"
Jul  6 06:16:53.746: INFO: At 2021-07-06 06:11:53 +0000 UTC - event for pvc-6q2hl: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  6 06:16:53.746: INFO: At 2021-07-06 06:12:03 +0000 UTC - event for pvc-6q2hl: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-2607vgc8h": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  6 06:16:53.746: INFO: At 2021-07-06 06:12:14 +0000 UTC - event for pvc-6q2hl: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-2607vgc8h": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  6 06:16:53.746: INFO: At 2021-07-06 06:15:10 +0000 UTC - event for pvc-6q2hl: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } ProvisioningSucceeded: Successfully provisioned volume pvc-3f57d92c-2632-4f75-a33d-71c0255e9e52
Jul  6 06:16:53.746: INFO: At 2021-07-06 06:15:11 +0000 UTC - event for pod-f26aa396-c151-458c-8481-0151727657f7: {default-scheduler } Scheduled: Successfully assigned provisioning-2607/pod-f26aa396-c151-458c-8481-0151727657f7 to ip-172-20-36-135.eu-west-2.compute.internal
Jul  6 06:16:53.746: INFO: At 2021-07-06 06:15:26 +0000 UTC - event for pod-f26aa396-c151-458c-8481-0151727657f7: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-3f57d92c-2632-4f75-a33d-71c0255e9e52" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  6 06:16:53.845: INFO: POD                                       NODE                                         PHASE    GRACE  CONDITIONS
Jul  6 06:16:53.845: INFO: pod-f26aa396-c151-458c-8481-0151727657f7  ip-172-20-36-135.eu-west-2.compute.internal  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:15:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:15:11 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:15:11 +0000 UTC ContainersNotReady containers with unready status: [write-pod]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:15:11 +0000 UTC  }]
Jul  6 06:16:53.845: INFO: 
Jul  6 06:16:54.038: INFO: 
Logging node info for node ip-172-20-32-57.eu-west-2.compute.internal
Jul  6 06:16:54.136: INFO: Node Info: (identical to the dump of ip-172-20-32-57.eu-west-2.compute.internal logged above at 06:16:52.739)
... skipping 176 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180

      Jul  6 06:16:53.451: Unexpected error:
          <*errors.errorString | 0xc003c2ad70>: {
              s: "pod \"pod-f26aa396-c151-458c-8481-0151727657f7\" is not Running: timed out waiting for the condition",
          }
          pod "pod-f26aa396-c151-458c-8481-0151727657f7" is not Running: timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:418
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":0,"skipped":12,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:16:58.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-271" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":1,"skipped":14,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 47 lines ...
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: Gathering metrics
W0706 06:12:02.387593   12480 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0706 06:12:02.387671   12480 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  6 06:17:02.581: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:17:02.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4884" for this suite.


• [SLOW TEST:301.557 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:02.788: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 2 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
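Most of the "-- skipping" lines in this log come from the storage test framework comparing a test pattern against the driver's declared capabilities before running anything. A rough sketch of that gate, assuming invented type names for illustration; the real framework uses its own DriverInfo capability map and ginkgo skips.

    // Illustrative capability gate; the real framework's types differ.
    package storagesketch

    import "fmt"

    type VolType string

    const (
        InlineVolume     VolType = "InlineVolume"
        PreprovisionedPV VolType = "PreprovisionedPV"
        DynamicPV        VolType = "DynamicPV"
    )

    // Driver is a stand-in for the framework's DriverInfo.
    type Driver struct {
        Name      string
        Supported map[VolType]bool
    }

    // checkPattern returns the skip message seen throughout this log when
    // a pattern requires a volume type the driver never declared.
    func checkPattern(d Driver, want VolType) error {
        if !d.Supported[want] {
            return fmt.Errorf("Driver %s doesn't support %s -- skipping", d.Name, want)
        }
        return nil
    }
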
... skipping 59 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":6,"skipped":46,"failed":0}
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:16:38.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
• [SLOW TEST:32.563 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":7,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:11.336: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 80 lines ...
Jul  6 06:16:46.490: INFO: PersistentVolumeClaim pvc-ddsfg found but phase is Pending instead of Bound.
Jul  6 06:16:48.589: INFO: PersistentVolumeClaim pvc-ddsfg found and phase=Bound (4.296087321s)
Jul  6 06:16:48.589: INFO: Waiting up to 3m0s for PersistentVolume local-krpm8 to have phase Bound
Jul  6 06:16:48.686: INFO: PersistentVolume local-krpm8 found and phase=Bound (97.241503ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-vw4q
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 06:16:48.981: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-vw4q" in namespace "provisioning-1006" to be "Succeeded or Failed"
Jul  6 06:16:49.078: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Pending", Reason="", readiness=false. Elapsed: 97.663453ms
Jul  6 06:16:51.176: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195813576s
Jul  6 06:16:53.274: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293659074s
Jul  6 06:16:55.372: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391688898s
Jul  6 06:16:57.472: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Running", Reason="", readiness=true. Elapsed: 8.490976533s
Jul  6 06:16:59.570: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Running", Reason="", readiness=true. Elapsed: 10.589606564s
... skipping 2 lines ...
Jul  6 06:17:05.872: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Running", Reason="", readiness=true. Elapsed: 16.891421983s
Jul  6 06:17:07.972: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Running", Reason="", readiness=true. Elapsed: 18.991431844s
Jul  6 06:17:10.071: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Running", Reason="", readiness=true. Elapsed: 21.089887305s
Jul  6 06:17:12.168: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Running", Reason="", readiness=true. Elapsed: 23.187558215s
Jul  6 06:17:14.266: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.285663771s
STEP: Saw pod success
Jul  6 06:17:14.266: INFO: Pod "pod-subpath-test-preprovisionedpv-vw4q" satisfied condition "Succeeded or Failed"
Jul  6 06:17:14.364: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-vw4q container test-container-subpath-preprovisionedpv-vw4q: <nil>
STEP: delete the pod
Jul  6 06:17:14.568: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-vw4q to disappear
Jul  6 06:17:14.666: INFO: Pod pod-subpath-test-preprovisionedpv-vw4q no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-vw4q
Jul  6 06:17:14.666: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-vw4q" in namespace "provisioning-1006"
... skipping 33 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
Jul  6 06:17:11.875: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  6 06:17:11.875: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-zg8r
STEP: Creating a pod to test subpath
Jul  6 06:17:11.975: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-zg8r" in namespace "provisioning-6947" to be "Succeeded or Failed"
Jul  6 06:17:12.072: INFO: Pod "pod-subpath-test-inlinevolume-zg8r": Phase="Pending", Reason="", readiness=false. Elapsed: 97.476224ms
Jul  6 06:17:14.170: INFO: Pod "pod-subpath-test-inlinevolume-zg8r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19546047s
Jul  6 06:17:16.269: INFO: Pod "pod-subpath-test-inlinevolume-zg8r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293730073s
STEP: Saw pod success
Jul  6 06:17:16.269: INFO: Pod "pod-subpath-test-inlinevolume-zg8r" satisfied condition "Succeeded or Failed"
Jul  6 06:17:16.372: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-zg8r container test-container-subpath-inlinevolume-zg8r: <nil>
STEP: delete the pod
Jul  6 06:17:16.580: INFO: Waiting for pod pod-subpath-test-inlinevolume-zg8r to disappear
Jul  6 06:17:16.679: INFO: Pod pod-subpath-test-inlinevolume-zg8r no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-zg8r
Jul  6 06:17:16.680: INFO: Deleting pod "pod-subpath-test-inlinevolume-zg8r" in namespace "provisioning-6947"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":8,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:17.082: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:17:20.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-537" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":9,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:20.681: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364

      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:15:50.948: INFO: >>> kubeConfig: /root/.kube/config
... skipping 149 lines ...
• [SLOW TEST:25.304 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":3,"skipped":18,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: create the rc
STEP: delete the rc
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0706 06:12:28.622193   12550 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0706 06:12:28.622273   12550 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  6 06:17:28.817: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering.
Jul  6 06:17:28.817: INFO: Deleting pod "simpletest.rc-b9qbs" in namespace "gc-7151"
Jul  6 06:17:28.922: INFO: Deleting pod "simpletest.rc-knrgp" in namespace "gc-7151"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:17:29.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7151" for this suite.
... skipping 2 lines ...
• [SLOW TEST:338.729 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:449
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":1,"skipped":19,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:17:29.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Jul  6 06:17:29.945: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-7646" to be "Succeeded or Failed"
Jul  6 06:17:30.042: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 97.877495ms
Jul  6 06:17:32.141: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.196782354s
Jul  6 06:17:32.141: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:17:32.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7646" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:32.448: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 11 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":1,"skipped":6,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:17:02.362: INFO: >>> kubeConfig: /root/.kube/config
... skipping 78 lines ...
Jul  6 06:12:36.708: INFO: Creating resource for dynamic PV
Jul  6 06:12:36.708: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass fsgroupchangepolicy-4793sjcgl
STEP: creating a claim
Jul  6 06:12:36.806: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating Pod in namespace fsgroupchangepolicy-4793 with fsgroup 1000
Jul  6 06:17:37.396: FAIL: Unexpected error:
    <*errors.errorString | 0xc003ed9830>: {
        s: "pod \"pod-29025ee3-3748-44b9-a614-378454978779\" is not Running: timed out waiting for the condition",
    }
    pod "pod-29025ee3-3748-44b9-a614-378454978779" is not Running: timed out waiting for the condition
occurred

... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "fsgroupchangepolicy-4793".
STEP: Found 5 events.
Jul  6 06:17:37.795: INFO: At 2021-07-06 06:12:36 +0000 UTC - event for aws6vq7z: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  6 06:17:37.795: INFO: At 2021-07-06 06:12:37 +0000 UTC - event for aws6vq7z: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } Provisioning: External provisioner is provisioning volume for claim "fsgroupchangepolicy-4793/aws6vq7z"
Jul  6 06:17:37.795: INFO: At 2021-07-06 06:12:37 +0000 UTC - event for aws6vq7z: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  6 06:17:37.795: INFO: At 2021-07-06 06:12:47 +0000 UTC - event for aws6vq7z: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-4793sjcgl": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  6 06:17:37.795: INFO: At 2021-07-06 06:13:10 +0000 UTC - event for aws6vq7z: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-b7qhf_5f4ed70b-d740-42c2-b054-9a6bd843abde } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-4793sjcgl": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  6 06:17:37.893: INFO: POD                                       NODE  PHASE    GRACE  CONDITIONS
Jul  6 06:17:37.894: INFO: pod-29025ee3-3748-44b9-a614-378454978779        Pending         []
Jul  6 06:17:37.894: INFO: 
Jul  6 06:17:37.992: INFO: 
Logging node info for node ip-172-20-32-57.eu-west-2.compute.internal
... skipping 180 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Jul  6 06:17:37.396: Unexpected error:
          <*errors.errorString | 0xc003ed9830>: {
              s: "pod \"pod-29025ee3-3748-44b9-a614-378454978779\" is not Running: timed out waiting for the condition",
          }
          pod "pod-29025ee3-3748-44b9-a614-378454978779" is not Running: timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:250
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":7,"skipped":46,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents"]}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:41.665: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data"]}
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:17:39.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:17:43.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9667" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":3,"skipped":6,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:43.429: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 167 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-558179d8-29ec-4b43-aa30-cdfb2f49fef3
STEP: Creating a pod to test consume configMaps
Jul  6 06:17:42.394: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3630c4e-5a43-4288-add6-e04d5649697a" in namespace "configmap-4001" to be "Succeeded or Failed"
Jul  6 06:17:42.491: INFO: Pod "pod-configmaps-a3630c4e-5a43-4288-add6-e04d5649697a": Phase="Pending", Reason="", readiness=false. Elapsed: 97.472165ms
Jul  6 06:17:44.589: INFO: Pod "pod-configmaps-a3630c4e-5a43-4288-add6-e04d5649697a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195162421s
Jul  6 06:17:46.687: INFO: Pod "pod-configmaps-a3630c4e-5a43-4288-add6-e04d5649697a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293148824s
STEP: Saw pod success
Jul  6 06:17:46.687: INFO: Pod "pod-configmaps-a3630c4e-5a43-4288-add6-e04d5649697a" satisfied condition "Succeeded or Failed"
Jul  6 06:17:46.784: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-configmaps-a3630c4e-5a43-4288-add6-e04d5649697a container agnhost-container: <nil>
STEP: delete the pod
Jul  6 06:17:46.985: INFO: Waiting for pod pod-configmaps-a3630c4e-5a43-4288-add6-e04d5649697a to disappear
Jul  6 06:17:47.082: INFO: Pod pod-configmaps-a3630c4e-5a43-4288-add6-e04d5649697a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.572 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":52,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:47.301: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 74 lines ...
• [SLOW TEST:272.079 seconds]
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify "immediate" deletion of a PVC that is not in active use by a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:114
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify \"immediate\" deletion of a PVC that is not in active use by a pod","total":-1,"completed":3,"skipped":25,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:50.272: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 130 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":14,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:57.486: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:17:57.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-7299" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed","total":-1,"completed":5,"skipped":18,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:17:58.230: INFO: Only supported for providers [vsphere] (not aws)
... skipping 23 lines ...
Jul  6 06:17:58.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  6 06:17:58.854: INFO: Waiting up to 5m0s for pod "pod-920911e7-e90e-49a1-895a-7712c10824a5" in namespace "emptydir-4738" to be "Succeeded or Failed"
Jul  6 06:17:58.951: INFO: Pod "pod-920911e7-e90e-49a1-895a-7712c10824a5": Phase="Pending", Reason="", readiness=false. Elapsed: 97.511403ms
Jul  6 06:18:01.049: INFO: Pod "pod-920911e7-e90e-49a1-895a-7712c10824a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195219712s
STEP: Saw pod success
Jul  6 06:18:01.049: INFO: Pod "pod-920911e7-e90e-49a1-895a-7712c10824a5" satisfied condition "Succeeded or Failed"
Jul  6 06:18:01.147: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-920911e7-e90e-49a1-895a-7712c10824a5 container test-container: <nil>
STEP: delete the pod
Jul  6 06:18:01.352: INFO: Waiting for pod pod-920911e7-e90e-49a1-895a-7712c10824a5 to disappear
Jul  6 06:18:01.450: INFO: Pod pod-920911e7-e90e-49a1-895a-7712c10824a5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:01.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4738" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":29,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:01.675: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 94 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through an HTTP proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:439
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy","total":-1,"completed":9,"skipped":57,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:02.601: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 180 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:221
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":8,"skipped":46,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:05.316: INFO: Only supported for providers [vsphere] (not aws)
... skipping 20 lines ...
STEP: Creating a kubernetes client
Jul  6 06:13:05.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume-provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:144
[It] should report an error and create no PV
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:734
STEP: creating a StorageClass
STEP: Creating a StorageClass
STEP: creating a claim object with a suffix for gluster dynamic provisioner
Jul  6 06:13:06.341: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  6 06:18:06.827: INFO: The test missed the event about failed provisioning, but checked that no volume was provisioned for 5m0s
Jul  6 06:18:06.827: INFO: deleting claim "volume-provisioning-9614"/"pvc-xwblx"
Jul  6 06:18:06.925: INFO: deleting storage class volume-provisioning-9614-invalid-aws5swqb
[AfterEach] [sig-storage] Dynamic Provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:07.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-provisioning-9614" for this suite.


• [SLOW TEST:301.555 seconds]
[sig-storage] Dynamic Provisioning
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Invalid AWS KMS key
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:733
    should report an error and create no PV
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/volume_provisioning.go:734
------------------------------
{"msg":"PASSED [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV","total":-1,"completed":7,"skipped":36,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:07.249: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1316
    should update the label on a resource  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":8,"skipped":42,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:13.700: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 45 lines ...
Jul  6 06:17:46.390: INFO: PersistentVolumeClaim pvc-v2gx7 found but phase is Pending instead of Bound.
Jul  6 06:17:48.490: INFO: PersistentVolumeClaim pvc-v2gx7 found and phase=Bound (10.594514977s)
Jul  6 06:17:48.490: INFO: Waiting up to 3m0s for PersistentVolume local-lt7bm to have phase Bound
Jul  6 06:17:48.588: INFO: PersistentVolume local-lt7bm found and phase=Bound (97.860285ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-k9ms
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 06:17:48.885: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-k9ms" in namespace "provisioning-5683" to be "Succeeded or Failed"
Jul  6 06:17:48.983: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Pending", Reason="", readiness=false. Elapsed: 98.413793ms
Jul  6 06:17:51.083: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198256004s
Jul  6 06:17:53.183: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 4.298804871s
Jul  6 06:17:55.284: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 6.398888324s
Jul  6 06:17:57.383: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 8.498443566s
Jul  6 06:17:59.482: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 10.597581453s
Jul  6 06:18:01.581: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 12.696719504s
Jul  6 06:18:03.684: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 14.799813435s
Jul  6 06:18:05.784: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 16.898821891s
Jul  6 06:18:07.882: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 18.997134946s
Jul  6 06:18:09.980: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Running", Reason="", readiness=true. Elapsed: 21.095563811s
Jul  6 06:18:12.080: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.194858423s
STEP: Saw pod success
Jul  6 06:18:12.080: INFO: Pod "pod-subpath-test-preprovisionedpv-k9ms" satisfied condition "Succeeded or Failed"
Jul  6 06:18:12.178: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-k9ms container test-container-subpath-preprovisionedpv-k9ms: <nil>
STEP: delete the pod
Jul  6 06:18:12.387: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-k9ms to disappear
Jul  6 06:18:12.485: INFO: Pod pod-subpath-test-preprovisionedpv-k9ms no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-k9ms
Jul  6 06:18:12.485: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-k9ms" in namespace "provisioning-5683"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:15.905: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should implement legacy replacement when the update strategy is OnDelete
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:503
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete","total":-1,"completed":8,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:18:18.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Jul  6 06:18:18.685: INFO: Waiting up to 5m0s for pod "downward-api-a91ffbfa-b410-4803-918b-0036e4ec23ed" in namespace "downward-api-8665" to be "Succeeded or Failed"
Jul  6 06:18:18.781: INFO: Pod "downward-api-a91ffbfa-b410-4803-918b-0036e4ec23ed": Phase="Pending", Reason="", readiness=false. Elapsed: 95.896492ms
Jul  6 06:18:20.878: INFO: Pod "downward-api-a91ffbfa-b410-4803-918b-0036e4ec23ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.192501914s
STEP: Saw pod success
Jul  6 06:18:20.878: INFO: Pod "downward-api-a91ffbfa-b410-4803-918b-0036e4ec23ed" satisfied condition "Succeeded or Failed"
Jul  6 06:18:20.974: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod downward-api-a91ffbfa-b410-4803-918b-0036e4ec23ed container dapi-container: <nil>
STEP: delete the pod
Jul  6 06:18:21.171: INFO: Waiting for pod downward-api-a91ffbfa-b410-4803-918b-0036e4ec23ed to disappear
Jul  6 06:18:21.266: INFO: Pod downward-api-a91ffbfa-b410-4803-918b-0036e4ec23ed no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:21.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8665" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":9,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:21.478: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:22.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-327" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":10,"skipped":37,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:22.377: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 139 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":8,"skipped":94,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:23.517: INFO: Only supported for providers [azure] (not aws)
... skipping 112 lines ...
Jul  6 06:18:17.029: INFO: PersistentVolumeClaim pvc-pjh6f found but phase is Pending instead of Bound.
Jul  6 06:18:19.127: INFO: PersistentVolumeClaim pvc-pjh6f found and phase=Bound (12.686459451s)
Jul  6 06:18:19.128: INFO: Waiting up to 3m0s for PersistentVolume local-pw5ft to have phase Bound
Jul  6 06:18:19.225: INFO: PersistentVolume local-pw5ft found and phase=Bound (97.925547ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9cjc
STEP: Creating a pod to test subpath
Jul  6 06:18:19.524: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9cjc" in namespace "provisioning-4591" to be "Succeeded or Failed"
Jul  6 06:18:19.621: INFO: Pod "pod-subpath-test-preprovisionedpv-9cjc": Phase="Pending", Reason="", readiness=false. Elapsed: 97.090273ms
Jul  6 06:18:21.719: INFO: Pod "pod-subpath-test-preprovisionedpv-9cjc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195122136s
Jul  6 06:18:23.818: INFO: Pod "pod-subpath-test-preprovisionedpv-9cjc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.293690954s
STEP: Saw pod success
Jul  6 06:18:23.818: INFO: Pod "pod-subpath-test-preprovisionedpv-9cjc" satisfied condition "Succeeded or Failed"
Jul  6 06:18:23.916: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-9cjc container test-container-subpath-preprovisionedpv-9cjc: <nil>
STEP: delete the pod
Jul  6 06:18:24.124: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9cjc to disappear
Jul  6 06:18:24.224: INFO: Pod pod-subpath-test-preprovisionedpv-9cjc no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9cjc
Jul  6 06:18:24.224: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9cjc" in namespace "provisioning-4591"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":10,"skipped":74,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents"]}

S
------------------------------
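The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above are the e2e framework polling the pod's status.phase until it reaches a terminal phase. A minimal client-go sketch of that pattern, assuming a reachable cluster; the kubeconfig path, namespace, and pod name are illustrative placeholders, not values recovered from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig file (path is a placeholder).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	// Poll every 2s with a 5m budget -- the cadence visible in the
	// "Elapsed: ..." timestamps above (~2.1s apart, 5m0s total).
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("default").
			Get(context.TODO(), "example-pod", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Phase=%q, Elapsed: %s\n", pod.Status.Phase, time.Since(start))
		// Succeeded and Failed are the terminal phases the wait accepts.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}

------------------------------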
{"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":3,"skipped":32,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:17:45.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : projected
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected","total":-1,"completed":4,"skipped":32,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:26.807: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 128 lines ...
STEP: Registering the crd webhook via the AdmissionRegistration API
Jul  6 06:17:36.901: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:17:47.199: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:17:57.401: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:18:07.603: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:18:17.804: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:18:17.804: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 405 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:18:17.804: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":9,"skipped":58,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SSSSS
------------------------------
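The "should deny crd creation" failure above ends in the stock apimachinery error "timed out waiting for the condition": the test re-checks webhook readiness on an interval until a deadline and then gives up. A small sketch of that mechanism using k8s.io/apimachinery/pkg/util/wait, with a stand-in condition that never becomes ready (the intervals here are illustrative, not the test's actual values):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// The condition stands in for the e2e check that probes the registered
	// webhook; here it never succeeds, so the poll exhausts its budget.
	err := wait.PollImmediate(10*time.Second, 30*time.Second, func() (bool, error) {
		fmt.Println("Waiting for webhook configuration to be ready...")
		return false, nil
	})
	// Prints the same error surfaced in the failure above:
	// "timed out waiting for the condition"
	fmt.Println(err)
}

------------------------------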
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:27.648: INFO: Driver "csi-hostpath" does not support FsGroup - skipping
... skipping 116 lines ...
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:480
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:484
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Jul  6 06:18:24.200: INFO: Waiting up to 5m0s for pod "pod-always-succeed58bc89ce-b4b8-43ad-924c-27fce695ddf2" in namespace "pods-9134" to be "Succeeded or Failed"
Jul  6 06:18:24.299: INFO: Pod "pod-always-succeed58bc89ce-b4b8-43ad-924c-27fce695ddf2": Phase="Pending", Reason="", readiness=false. Elapsed: 98.62659ms
Jul  6 06:18:26.396: INFO: Pod "pod-always-succeed58bc89ce-b4b8-43ad-924c-27fce695ddf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.19588621s
STEP: Saw pod success
Jul  6 06:18:26.397: INFO: Pod "pod-always-succeed58bc89ce-b4b8-43ad-924c-27fce695ddf2" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:28.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:478
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:484
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":9,"skipped":115,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:28.810: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 41 lines ...
Jul  6 06:18:16.180: INFO: PersistentVolumeClaim pvc-4rnpj found but phase is Pending instead of Bound.
Jul  6 06:18:18.278: INFO: PersistentVolumeClaim pvc-4rnpj found and phase=Bound (8.484914707s)
Jul  6 06:18:18.278: INFO: Waiting up to 3m0s for PersistentVolume local-7swz4 to have phase Bound
Jul  6 06:18:18.374: INFO: PersistentVolume local-7swz4 found and phase=Bound (96.352834ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-czq9
STEP: Creating a pod to test subpath
Jul  6 06:18:18.667: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-czq9" in namespace "provisioning-7532" to be "Succeeded or Failed"
Jul  6 06:18:18.763: INFO: Pod "pod-subpath-test-preprovisionedpv-czq9": Phase="Pending", Reason="", readiness=false. Elapsed: 96.533363ms
Jul  6 06:18:20.860: INFO: Pod "pod-subpath-test-preprovisionedpv-czq9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193440489s
Jul  6 06:18:22.957: INFO: Pod "pod-subpath-test-preprovisionedpv-czq9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.290076707s
STEP: Saw pod success
Jul  6 06:18:22.957: INFO: Pod "pod-subpath-test-preprovisionedpv-czq9" satisfied condition "Succeeded or Failed"
Jul  6 06:18:23.053: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-czq9 container test-container-subpath-preprovisionedpv-czq9: <nil>
STEP: delete the pod
Jul  6 06:18:23.262: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-czq9 to disappear
Jul  6 06:18:23.359: INFO: Pod pod-subpath-test-preprovisionedpv-czq9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-czq9
Jul  6 06:18:23.359: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-czq9" in namespace "provisioning-7532"
STEP: Creating pod pod-subpath-test-preprovisionedpv-czq9
STEP: Creating a pod to test subpath
Jul  6 06:18:23.554: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-czq9" in namespace "provisioning-7532" to be "Succeeded or Failed"
Jul  6 06:18:23.652: INFO: Pod "pod-subpath-test-preprovisionedpv-czq9": Phase="Pending", Reason="", readiness=false. Elapsed: 97.193465ms
Jul  6 06:18:25.750: INFO: Pod "pod-subpath-test-preprovisionedpv-czq9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.19547455s
STEP: Saw pod success
Jul  6 06:18:25.750: INFO: Pod "pod-subpath-test-preprovisionedpv-czq9" satisfied condition "Succeeded or Failed"
Jul  6 06:18:25.847: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-czq9 container test-container-subpath-preprovisionedpv-czq9: <nil>
STEP: delete the pod
Jul  6 06:18:26.052: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-czq9 to disappear
Jul  6 06:18:26.149: INFO: Pod pod-subpath-test-preprovisionedpv-czq9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-czq9
Jul  6 06:18:26.149: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-czq9" in namespace "provisioning-7532"
... skipping 42 lines ...
Jul  6 06:14:13.995: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-2279l2gb
STEP: creating a claim
Jul  6 06:14:14.093: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-tdd4
STEP: Creating a pod to test subpath
Jul  6 06:14:14.386: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tdd4" in namespace "provisioning-227" to be "Succeeded or Failed"
Jul  6 06:14:14.482: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 96.497871ms
Jul  6 06:14:16.580: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194221237s
Jul  6 06:14:18.678: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292302429s
Jul  6 06:14:20.775: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389086967s
Jul  6 06:14:22.873: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487148828s
Jul  6 06:14:24.971: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.585168887s
... skipping 14 lines ...
Jul  6 06:14:56.435: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.049515984s
Jul  6 06:14:58.534: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 44.148488429s
Jul  6 06:15:00.632: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 46.245798174s
Jul  6 06:15:02.730: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 48.343629535s
Jul  6 06:15:04.827: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 50.440663655s
STEP: Saw pod success
Jul  6 06:15:04.827: INFO: Pod "pod-subpath-test-dynamicpv-tdd4" satisfied condition "Succeeded or Failed"
Jul  6 06:15:04.923: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-tdd4 container test-container-subpath-dynamicpv-tdd4: <nil>
STEP: delete the pod
Jul  6 06:15:05.125: INFO: Waiting for pod pod-subpath-test-dynamicpv-tdd4 to disappear
Jul  6 06:15:05.221: INFO: Pod pod-subpath-test-dynamicpv-tdd4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tdd4
Jul  6 06:15:05.221: INFO: Deleting pod "pod-subpath-test-dynamicpv-tdd4" in namespace "provisioning-227"
STEP: Creating pod pod-subpath-test-dynamicpv-tdd4
STEP: Creating a pod to test subpath
Jul  6 06:15:05.415: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-tdd4" in namespace "provisioning-227" to be "Succeeded or Failed"
Jul  6 06:15:05.514: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 98.685149ms
Jul  6 06:15:07.611: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195418588s
Jul  6 06:15:09.709: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293697265s
Jul  6 06:15:11.806: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390855693s
Jul  6 06:15:13.904: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.489114431s
Jul  6 06:15:16.001: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.586287006s
Jul  6 06:15:18.100: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.684504526s
Jul  6 06:15:20.198: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.782432436s
Jul  6 06:15:22.294: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.87900565s
Jul  6 06:15:24.392: INFO: Pod "pod-subpath-test-dynamicpv-tdd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.976779503s
STEP: Saw pod success
Jul  6 06:15:24.392: INFO: Pod "pod-subpath-test-dynamicpv-tdd4" satisfied condition "Succeeded or Failed"
Jul  6 06:15:24.488: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-tdd4 container test-container-subpath-dynamicpv-tdd4: <nil>
STEP: delete the pod
Jul  6 06:15:24.697: INFO: Waiting for pod pod-subpath-test-dynamicpv-tdd4 to disappear
Jul  6 06:15:24.798: INFO: Pod pod-subpath-test-dynamicpv-tdd4 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-tdd4
Jul  6 06:15:24.798: INFO: Deleting pod "pod-subpath-test-dynamicpv-tdd4" in namespace "provisioning-227"
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":3,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:29.248: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:30.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-5880" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":10,"skipped":85,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":9,"skipped":51,"failed":0}
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:18:28.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 83 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:31.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7823" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource ","total":-1,"completed":10,"skipped":51,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":17,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:17:08.425: INFO: >>> kubeConfig: /root/.kube/config
... skipping 309 lines ...
Jul  6 06:17:57.209: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-vjlbq] to have phase Bound
Jul  6 06:17:57.306: INFO: PersistentVolumeClaim pvc-vjlbq found and phase=Bound (96.366966ms)
STEP: Deleting the previously created pod
Jul  6 06:18:03.795: INFO: Deleting pod "pvc-volume-tester-qs6xh" in namespace "csi-mock-volumes-9952"
Jul  6 06:18:03.895: INFO: Wait up to 5m0s for pod "pvc-volume-tester-qs6xh" to be fully deleted
STEP: Checking CSI driver logs
Jul  6 06:18:08.200: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/efde4070-2208-4066-b26c-7a318b0fd372/volumes/kubernetes.io~csi/pvc-4285b7de-815a-4f17-88a6-a0ab2c3027b8/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-qs6xh
Jul  6 06:18:08.200: INFO: Deleting pod "pvc-volume-tester-qs6xh" in namespace "csi-mock-volumes-9952"
STEP: Deleting claim pvc-vjlbq
Jul  6 06:18:08.490: INFO: Waiting up to 2m0s for PersistentVolume pvc-4285b7de-815a-4f17-88a6-a0ab2c3027b8 to get deleted
Jul  6 06:18:08.587: INFO: PersistentVolume pvc-4285b7de-815a-4f17-88a6-a0ab2c3027b8 found and phase=Released (96.973167ms)
Jul  6 06:18:10.684: INFO: PersistentVolume pvc-4285b7de-815a-4f17-88a6-a0ab2c3027b8 found and phase=Released (2.193981979s)
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":4,"skipped":37,"failed":0}

SS
------------------------------
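In the CSI mock-volume test above, "Checking CSI driver logs" works by scanning the mock driver's output for one JSON record per gRPC call (the NodeUnpublishVolume line quoted earlier). A sketch of decoding such a record; the struct fields are shaped after the quoted record, and the type name is ours:

package main

import (
	"encoding/json"
	"fmt"
)

// mockCSICall mirrors the per-call JSON records in the mock driver's log.
type mockCSICall struct {
	Method   string          `json:"Method"`
	Request  json.RawMessage `json:"Request"`
	Response json.RawMessage `json:"Response"`
	Error    string          `json:"Error"`
}

func main() {
	line := `{"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}`
	var call mockCSICall
	if err := json.Unmarshal([]byte(line), &call); err != nil {
		panic(err)
	}
	// The test asserts on which methods appear -- e.g. that pod info was
	// not passed along when podInfoOnMount=false.
	fmt.Println("saw call:", call.Method)
}

------------------------------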
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:34.680: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 46 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Jul  6 06:18:32.299: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7810" to be "Succeeded or Failed"
Jul  6 06:18:32.396: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 96.728811ms
Jul  6 06:18:34.493: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.193418456s
STEP: Saw pod success
Jul  6 06:18:34.493: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  6 06:18:34.589: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jul  6 06:18:34.792: INFO: Waiting for pod pod-host-path-test to disappear
Jul  6 06:18:34.888: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:34.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7810" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":11,"skipped":57,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:35.113: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 26 lines ...
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  6 06:18:29.943: INFO: The status of Pod server-envvars-da57bf63-99d8-41d1-aaa1-72b56483fbcf is Pending, waiting for it to be Running (with Ready = true)
Jul  6 06:18:32.041: INFO: The status of Pod server-envvars-da57bf63-99d8-41d1-aaa1-72b56483fbcf is Running (Ready = true)
Jul  6 06:18:32.344: INFO: Waiting up to 5m0s for pod "client-envvars-b38dfd8c-30fb-4d7e-8454-fc8dbe804b63" in namespace "pods-8318" to be "Succeeded or Failed"
Jul  6 06:18:32.441: INFO: Pod "client-envvars-b38dfd8c-30fb-4d7e-8454-fc8dbe804b63": Phase="Pending", Reason="", readiness=false. Elapsed: 96.493643ms
Jul  6 06:18:34.538: INFO: Pod "client-envvars-b38dfd8c-30fb-4d7e-8454-fc8dbe804b63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.193797296s
STEP: Saw pod success
Jul  6 06:18:34.538: INFO: Pod "client-envvars-b38dfd8c-30fb-4d7e-8454-fc8dbe804b63" satisfied condition "Succeeded or Failed"
Jul  6 06:18:34.635: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod client-envvars-b38dfd8c-30fb-4d7e-8454-fc8dbe804b63 container env3cont: <nil>
STEP: delete the pod
Jul  6 06:18:34.834: INFO: Waiting for pod client-envvars-b38dfd8c-30fb-4d7e-8454-fc8dbe804b63 to disappear
Jul  6 06:18:34.931: INFO: Pod client-envvars-b38dfd8c-30fb-4d7e-8454-fc8dbe804b63 no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":14,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
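The "should contain environment variables for services" test above relies on the kubelet injecting FOO_SERVICE_HOST / FOO_SERVICE_PORT style variables for services that exist when a container starts. Run inside any pod, this sketch lists whichever of those variables are present:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// The kubelet injects one _SERVICE_HOST/_SERVICE_PORT pair per active
	// service; filter them out of the container's environment.
	for _, kv := range os.Environ() {
		if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
			fmt.Println(kv)
		}
	}
}

------------------------------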
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:35.184: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 14 lines ...
      Driver supports dynamic provisioning, skipping PreprovisionedPV pattern

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:241
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":9,"skipped":50,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:18:34.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:37.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-290" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":-1,"completed":10,"skipped":50,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:38.202: INFO: Only supported for providers [azure] (not aws)
... skipping 68 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-fa33203e-0491-48d4-8496-ff469a081b10
STEP: Creating a pod to test consume configMaps
Jul  6 06:18:38.951: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91536520-1f4b-4da6-a28f-dcda84ee5c74" in namespace "projected-4623" to be "Succeeded or Failed"
Jul  6 06:18:39.049: INFO: Pod "pod-projected-configmaps-91536520-1f4b-4da6-a28f-dcda84ee5c74": Phase="Pending", Reason="", readiness=false. Elapsed: 98.134562ms
Jul  6 06:18:41.146: INFO: Pod "pod-projected-configmaps-91536520-1f4b-4da6-a28f-dcda84ee5c74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.19517116s
STEP: Saw pod success
Jul  6 06:18:41.147: INFO: Pod "pod-projected-configmaps-91536520-1f4b-4da6-a28f-dcda84ee5c74" satisfied condition "Succeeded or Failed"
Jul  6 06:18:41.243: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-projected-configmaps-91536520-1f4b-4da6-a28f-dcda84ee5c74 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 06:18:41.447: INFO: Waiting for pod pod-projected-configmaps-91536520-1f4b-4da6-a28f-dcda84ee5c74 to disappear
Jul  6 06:18:41.544: INFO: Pod pod-projected-configmaps-91536520-1f4b-4da6-a28f-dcda84ee5c74 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:41.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4623" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":59,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:41.752: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 131 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":11,"skipped":86,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:49.071: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 78 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:52.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8978" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":13,"skipped":83,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:52.731: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 204 lines ...
Jul  6 06:18:45.401: INFO: PersistentVolumeClaim pvc-tpklt found but phase is Pending instead of Bound.
Jul  6 06:18:47.498: INFO: PersistentVolumeClaim pvc-tpklt found and phase=Bound (8.48691096s)
Jul  6 06:18:47.498: INFO: Waiting up to 3m0s for PersistentVolume local-2t24m to have phase Bound
Jul  6 06:18:47.595: INFO: PersistentVolume local-2t24m found and phase=Bound (96.49221ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-llvn
STEP: Creating a pod to test subpath
Jul  6 06:18:47.887: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-llvn" in namespace "provisioning-3478" to be "Succeeded or Failed"
Jul  6 06:18:47.984: INFO: Pod "pod-subpath-test-preprovisionedpv-llvn": Phase="Pending", Reason="", readiness=false. Elapsed: 96.625676ms
Jul  6 06:18:50.082: INFO: Pod "pod-subpath-test-preprovisionedpv-llvn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194741525s
Jul  6 06:18:52.180: INFO: Pod "pod-subpath-test-preprovisionedpv-llvn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2928932s
STEP: Saw pod success
Jul  6 06:18:52.180: INFO: Pod "pod-subpath-test-preprovisionedpv-llvn" satisfied condition "Succeeded or Failed"
Jul  6 06:18:52.277: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-llvn container test-container-subpath-preprovisionedpv-llvn: <nil>
STEP: delete the pod
Jul  6 06:18:52.478: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-llvn to disappear
Jul  6 06:18:52.575: INFO: Pod pod-subpath-test-preprovisionedpv-llvn no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-llvn
Jul  6 06:18:52.575: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-llvn" in namespace "provisioning-3478"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":5,"skipped":29,"failed":0}

SSSSS
------------------------------
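Several blocks above wait through "PersistentVolumeClaim ... found but phase is Pending instead of Bound." before creating their test pod; that is the claim-binding analogue of the pod-phase poll sketched earlier. A minimal version, assuming an existing client-go clientset:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForClaimBound polls the claim's status.phase until it reports Bound
// or the timeout elapses.
func waitForClaimBound(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pvc, err := client.CoreV1().PersistentVolumeClaims(ns).
			Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != corev1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n",
				name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
}

------------------------------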
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 35 lines ...
Jul  6 06:18:41.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
Jul  6 06:18:42.314: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 06:18:42.511: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8076" in namespace "provisioning-8076" to be "Succeeded or Failed"
Jul  6 06:18:42.608: INFO: Pod "hostpath-symlink-prep-provisioning-8076": Phase="Pending", Reason="", readiness=false. Elapsed: 96.758998ms
Jul  6 06:18:44.705: INFO: Pod "hostpath-symlink-prep-provisioning-8076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194139091s
STEP: Saw pod success
Jul  6 06:18:44.705: INFO: Pod "hostpath-symlink-prep-provisioning-8076" satisfied condition "Succeeded or Failed"
Jul  6 06:18:44.705: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8076" in namespace "provisioning-8076"
Jul  6 06:18:44.806: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8076" to be fully deleted
Jul  6 06:18:44.902: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-9t8r
STEP: Creating a pod to test subpath
Jul  6 06:18:45.001: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-9t8r" in namespace "provisioning-8076" to be "Succeeded or Failed"
Jul  6 06:18:45.098: INFO: Pod "pod-subpath-test-inlinevolume-9t8r": Phase="Pending", Reason="", readiness=false. Elapsed: 96.568773ms
Jul  6 06:18:47.195: INFO: Pod "pod-subpath-test-inlinevolume-9t8r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194311512s
Jul  6 06:18:49.293: INFO: Pod "pod-subpath-test-inlinevolume-9t8r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.292461231s
STEP: Saw pod success
Jul  6 06:18:49.294: INFO: Pod "pod-subpath-test-inlinevolume-9t8r" satisfied condition "Succeeded or Failed"
Jul  6 06:18:49.390: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-9t8r container test-container-subpath-inlinevolume-9t8r: <nil>
STEP: delete the pod
Jul  6 06:18:49.590: INFO: Waiting for pod pod-subpath-test-inlinevolume-9t8r to disappear
Jul  6 06:18:49.687: INFO: Pod pod-subpath-test-inlinevolume-9t8r no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-9t8r
Jul  6 06:18:49.687: INFO: Deleting pod "pod-subpath-test-inlinevolume-9t8r" in namespace "provisioning-8076"
STEP: Deleting pod
Jul  6 06:18:49.783: INFO: Deleting pod "pod-subpath-test-inlinevolume-9t8r" in namespace "provisioning-8076"
Jul  6 06:18:49.978: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-8076" in namespace "provisioning-8076" to be "Succeeded or Failed"
Jul  6 06:18:50.075: INFO: Pod "hostpath-symlink-prep-provisioning-8076": Phase="Pending", Reason="", readiness=false. Elapsed: 96.557666ms
Jul  6 06:18:52.172: INFO: Pod "hostpath-symlink-prep-provisioning-8076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193590569s
Jul  6 06:18:54.270: INFO: Pod "hostpath-symlink-prep-provisioning-8076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.291351785s
STEP: Saw pod success
Jul  6 06:18:54.270: INFO: Pod "hostpath-symlink-prep-provisioning-8076" satisfied condition "Succeeded or Failed"
Jul  6 06:18:54.270: INFO: Deleting pod "hostpath-symlink-prep-provisioning-8076" in namespace "provisioning-8076"
Jul  6 06:18:54.372: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-8076" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:54.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8076" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":12,"skipped":69,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:54.672: INFO: Only supported for providers [openstack] (not aws)
... skipping 81 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Driver emptydir doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source","total":-1,"completed":3,"skipped":17,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:18:32.419: INFO: >>> kubeConfig: /root/.kube/config
... skipping 17 lines ...
Jul  6 06:18:46.761: INFO: PersistentVolumeClaim pvc-6vkd2 found but phase is Pending instead of Bound.
Jul  6 06:18:48.860: INFO: PersistentVolumeClaim pvc-6vkd2 found and phase=Bound (12.69327562s)
Jul  6 06:18:48.860: INFO: Waiting up to 3m0s for PersistentVolume local-gdc4n to have phase Bound
Jul  6 06:18:48.957: INFO: PersistentVolume local-gdc4n found and phase=Bound (96.96618ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-krwf
STEP: Creating a pod to test subpath
Jul  6 06:18:49.250: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-krwf" in namespace "provisioning-8341" to be "Succeeded or Failed"
Jul  6 06:18:49.349: INFO: Pod "pod-subpath-test-preprovisionedpv-krwf": Phase="Pending", Reason="", readiness=false. Elapsed: 99.314479ms
Jul  6 06:18:51.448: INFO: Pod "pod-subpath-test-preprovisionedpv-krwf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197592482s
Jul  6 06:18:53.547: INFO: Pod "pod-subpath-test-preprovisionedpv-krwf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.296386823s
STEP: Saw pod success
Jul  6 06:18:53.547: INFO: Pod "pod-subpath-test-preprovisionedpv-krwf" satisfied condition "Succeeded or Failed"
Jul  6 06:18:53.644: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-krwf container test-container-volume-preprovisionedpv-krwf: <nil>
STEP: delete the pod
Jul  6 06:18:53.847: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-krwf to disappear
Jul  6 06:18:53.944: INFO: Pod pod-subpath-test-preprovisionedpv-krwf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-krwf
Jul  6 06:18:53.944: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-krwf" in namespace "provisioning-8341"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":17,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:55.326: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
Jul  6 06:18:53.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul  6 06:18:54.140: INFO: Waiting up to 5m0s for pod "security-context-5267e54b-f571-4936-85c9-95d5ec20e5d9" in namespace "security-context-2369" to be "Succeeded or Failed"
Jul  6 06:18:54.237: INFO: Pod "security-context-5267e54b-f571-4936-85c9-95d5ec20e5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 97.238003ms
Jul  6 06:18:56.371: INFO: Pod "security-context-5267e54b-f571-4936-85c9-95d5ec20e5d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.231223722s
STEP: Saw pod success
Jul  6 06:18:56.371: INFO: Pod "security-context-5267e54b-f571-4936-85c9-95d5ec20e5d9" satisfied condition "Succeeded or Failed"
Jul  6 06:18:56.504: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod security-context-5267e54b-f571-4936-85c9-95d5ec20e5d9 container test-container: <nil>
STEP: delete the pod
Jul  6 06:18:56.707: INFO: Waiting for pod security-context-5267e54b-f571-4936-85c9-95d5ec20e5d9 to disappear
Jul  6 06:18:56.803: INFO: Pod security-context-5267e54b-f571-4936-85c9-95d5ec20e5d9 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:18:56.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2369" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":14,"skipped":106,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:18:57.007: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 35 lines ...
• [SLOW TEST:93.075 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}

SSSSSSSSS
------------------------------
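The CronJob test above ("should schedule multiple jobs concurrently") needs a spec whose concurrencyPolicy permits overlapping runs. A sketch of such an object in batch/v1 Go types; the schedule, image, and names are illustrative, not the test's actual values:

package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// concurrentCronJob fires every minute and allows runs to overlap, so two
// jobs can be active at the same time.
func concurrentCronJob() *batchv1.CronJob {
	return &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "concurrent"},
		Spec: batchv1.CronJobSpec{
			Schedule:          "*/1 * * * *",
			ConcurrencyPolicy: batchv1.AllowConcurrent,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:    "c",
								Image:   "busybox", // placeholder image
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
}

------------------------------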
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:19:01.295: INFO: Only supported for providers [gce gke] (not aws)
... skipping 209 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":5,"skipped":18,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:18:56.806: INFO: >>> kubeConfig: /root/.kube/config
... skipping 13 lines ...
Jul  6 06:19:00.718: INFO: PersistentVolumeClaim pvc-rjrqm found but phase is Pending instead of Bound.
Jul  6 06:19:02.814: INFO: PersistentVolumeClaim pvc-rjrqm found and phase=Bound (2.193903092s)
Jul  6 06:19:02.815: INFO: Waiting up to 3m0s for PersistentVolume local-djqcg to have phase Bound
Jul  6 06:19:02.911: INFO: PersistentVolume local-djqcg found and phase=Bound (96.681258ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qgjm
STEP: Creating a pod to test subpath
Jul  6 06:19:03.207: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qgjm" in namespace "provisioning-5486" to be "Succeeded or Failed"
Jul  6 06:19:03.307: INFO: Pod "pod-subpath-test-preprovisionedpv-qgjm": Phase="Pending", Reason="", readiness=false. Elapsed: 100.423839ms
Jul  6 06:19:05.405: INFO: Pod "pod-subpath-test-preprovisionedpv-qgjm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198149789s
Jul  6 06:19:07.512: INFO: Pod "pod-subpath-test-preprovisionedpv-qgjm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305192312s
STEP: Saw pod success
Jul  6 06:19:07.512: INFO: Pod "pod-subpath-test-preprovisionedpv-qgjm" satisfied condition "Succeeded or Failed"
Jul  6 06:19:07.609: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-qgjm container test-container-subpath-preprovisionedpv-qgjm: <nil>
STEP: delete the pod
Jul  6 06:19:07.810: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qgjm to disappear
Jul  6 06:19:07.907: INFO: Pod pod-subpath-test-preprovisionedpv-qgjm no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qgjm
Jul  6 06:19:07.908: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qgjm" in namespace "provisioning-5486"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":18,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 06:19:10.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b803d2fe-e826-48e1-a028-10056bf8f4fe" in namespace "projected-6447" to be "Succeeded or Failed"
Jul  6 06:19:10.683: INFO: Pod "downwardapi-volume-b803d2fe-e826-48e1-a028-10056bf8f4fe": Phase="Pending", Reason="", readiness=false. Elapsed: 96.926949ms
Jul  6 06:19:12.783: INFO: Pod "downwardapi-volume-b803d2fe-e826-48e1-a028-10056bf8f4fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.19682699s
STEP: Saw pod success
Jul  6 06:19:12.783: INFO: Pod "downwardapi-volume-b803d2fe-e826-48e1-a028-10056bf8f4fe" satisfied condition "Succeeded or Failed"
Jul  6 06:19:12.881: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod downwardapi-volume-b803d2fe-e826-48e1-a028-10056bf8f4fe container client-container: <nil>
STEP: delete the pod
Jul  6 06:19:13.082: INFO: Waiting for pod downwardapi-volume-b803d2fe-e826-48e1-a028-10056bf8f4fe to disappear
Jul  6 06:19:13.179: INFO: Pod downwardapi-volume-b803d2fe-e826-48e1-a028-10056bf8f4fe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:19:13.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6447" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":19,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}

SSSSSSSSS
------------------------------
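The downward-API test above ("should provide container's cpu limit") mounts a volume whose file content is resolved from the container's resource limits. A sketch of that volume in Go API types; the file path and container name are placeholders:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// cpuLimitVolume exposes the consuming container's cpu limit as a file.
func cpuLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // must match the container that mounts it
						Resource:      "limits.cpu",
					},
				}},
			},
		},
	}
}

------------------------------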
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:19:13.430: INFO: Only supported for providers [gce gke] (not aws)
... skipping 39 lines ...
Jul  6 06:16:10.357: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: the server is currently unable to handle the request (get pods dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f)
Jul  6 06:16:40.454: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-7758.svc.cluster.local from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: the server is currently unable to handle the request (get pods dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f)
Jul  6 06:17:10.556: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: the server is currently unable to handle the request (get pods dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f)
Jul  6 06:17:40.654: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: the server is currently unable to handle the request (get pods dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f)
Jul  6 06:18:10.751: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: the server is currently unable to handle the request (get pods dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f)
Jul  6 06:18:40.848: INFO: Unable to read jessie_udp@kubernetes.default from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: the server is currently unable to handle the request (get pods dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f)
Jul  6 06:19:09.969: FAIL: Unable to read jessie_tcp@kubernetes.default from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-7758/pods/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f/proxy/results/jessie_tcp@kubernetes.default": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc000124010, 0x7f916d9f6878, 0x18, 0xc00049fa58)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc000124010, 0xc00385ea90, 0x29e9900, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b
testing.tRunner(0xc000972480, 0x71cf618)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0706 06:19:09.970761   12454 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul  6 06:19:09.970: Unable to read jessie_tcp@kubernetes.default from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: Get \"https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-7758/pods/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f/proxy/results/jessie_tcp@kubernetes.default\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc000124010, 0x7f916d9f6878, 0x18, 0xc00049fa58)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc000124010, 0xc00385ea90, 0x29e9900, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x780f3c8, 0xc000124010, 0xc00049fa01, 0xc00049fa58, 0xc00385ea90, 0x67ba9a0, 0xc00385ea90)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x780f3c8, 0xc000124010, 0x12a05f200, 0x8bb2c97000, 0xc00385ea90, 0x6cf83e0, 0x24f8401)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003e9a930, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc002aa3f00, 0x10, 0x10, 0x6fb5f5e, 0x7, 0xc0033a3800, 0x78a18a8, 0xc0033542c0, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0005aec60, 0xc0033a3800, 0xc002aa3f00, 0x10, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.3()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:107 +0x6af\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000972480)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000972480)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b\ntesting.tRunner(0xc000972480, 0x71cf618)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6b4ac20, 0xc003516200)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6b4ac20, 0xc003516200)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00328c160, 0x153, 0x87cadfb, 0x7d, 0xd9, 0xc0031fd000, 0xa8c)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x628e540, 0x76c5570)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc00328c160, 0x153, 0xc003e1f6c8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00328c160, 0x153, 0xc003e1f7b0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x7059d05, 0x24, 0xc003e1fa10, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc000124010, 0x7f916d9f6878, 0x18, 0xc00049fa58)
... skipping 267 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:90

  Jul  6 06:19:09.970: Unable to read jessie_tcp@kubernetes.default from pod dns-7758/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-7758/pods/dns-test-84ed6b33-d283-423e-bf91-55e16f15a30f/proxy/results/jessie_tcp@kubernetes.default": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
------------------------------
{"msg":"FAILED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":9,"skipped":83,"failed":1,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:19:14.124: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 54 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:19:14.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8617" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":10,"skipped":97,"failed":1,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:19:15.110: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-be45d7f2-05b8-4834-9b11-474992e1ad95
STEP: Creating a pod to test consume configMaps
Jul  6 06:19:15.862: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8f0398b6-b175-47c2-85c6-581f1f2ffcc6" in namespace "projected-4590" to be "Succeeded or Failed"
Jul  6 06:19:15.959: INFO: Pod "pod-projected-configmaps-8f0398b6-b175-47c2-85c6-581f1f2ffcc6": Phase="Pending", Reason="", readiness=false. Elapsed: 96.441719ms
Jul  6 06:19:18.056: INFO: Pod "pod-projected-configmaps-8f0398b6-b175-47c2-85c6-581f1f2ffcc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194168017s
STEP: Saw pod success
Jul  6 06:19:18.056: INFO: Pod "pod-projected-configmaps-8f0398b6-b175-47c2-85c6-581f1f2ffcc6" satisfied condition "Succeeded or Failed"
Jul  6 06:19:18.153: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-projected-configmaps-8f0398b6-b175-47c2-85c6-581f1f2ffcc6 container agnhost-container: <nil>
STEP: delete the pod
Jul  6 06:19:18.355: INFO: Waiting for pod pod-projected-configmaps-8f0398b6-b175-47c2-85c6-581f1f2ffcc6 to disappear
Jul  6 06:19:18.453: INFO: Pod pod-projected-configmaps-8f0398b6-b175-47c2-85c6-581f1f2ffcc6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:19:18.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4590" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":107,"failed":1,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:19:18.659: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 57 lines ...
Jul  6 06:19:23.175: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5655 explain e2e-test-crd-publish-openapi-4981-crds.spec'
Jul  6 06:19:23.628: INFO: stderr: ""
Jul  6 06:19:23.628: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-4981-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul  6 06:19:23.628: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5655 explain e2e-test-crd-publish-openapi-4981-crds.spec.bars'
Jul  6 06:19:24.087: INFO: stderr: ""
Jul  6 06:19:24.087: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-4981-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul  6 06:19:24.087: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5655 explain e2e-test-crd-publish-openapi-4981-crds.spec.bars2'
Jul  6 06:19:24.535: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:19:27.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5655" for this suite.
... skipping 2 lines ...
• [SLOW TEST:14.676 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":8,"skipped":35,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:19:28.154: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 27 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  6 06:18:28.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761149108, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761149108, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761149108, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761149108, loc:(*time.Location)(0x9f895a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  6 06:18:31.791: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
Jul  6 06:18:42.182: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:18:52.479: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:19:02.678: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:19:12.881: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:19:23.079: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:19:23.079: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 426 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• Failure [64.248 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:19:23.079: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1275
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":4,"skipped":53,"failed":2,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
[BeforeEach] [sig-storage] Flexvolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:19:31.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename flexvolume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 62 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:19:31.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2105" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":9,"skipped":38,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: Destroying namespace "apply-2239" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller","total":-1,"completed":10,"skipped":39,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:19:33.593: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:19:35.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7094" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":11,"skipped":48,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:19:35.345: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 41 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":6,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 69554 lines ...
Jul  6 06:39:44.206: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathwqp9l] to have phase Bound
Jul  6 06:39:44.302: INFO: PersistentVolumeClaim csi-hostpathwqp9l found but phase is Pending instead of Bound.
Jul  6 06:39:46.399: INFO: PersistentVolumeClaim csi-hostpathwqp9l found but phase is Pending instead of Bound.
Jul  6 06:39:48.496: INFO: PersistentVolumeClaim csi-hostpathwqp9l found and phase=Bound (4.290088364s)
STEP: Creating pod pod-subpath-test-dynamicpv-x4q6
STEP: Creating a pod to test subpath
Jul  6 06:39:48.786: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-x4q6" in namespace "provisioning-3725" to be "Succeeded or Failed"
Jul  6 06:39:48.884: INFO: Pod "pod-subpath-test-dynamicpv-x4q6": Phase="Pending", Reason="", readiness=false. Elapsed: 97.775133ms
Jul  6 06:39:50.982: INFO: Pod "pod-subpath-test-dynamicpv-x4q6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195803352s
Jul  6 06:39:53.079: INFO: Pod "pod-subpath-test-dynamicpv-x4q6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293112796s
Jul  6 06:39:55.177: INFO: Pod "pod-subpath-test-dynamicpv-x4q6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.391176816s
Jul  6 06:39:57.274: INFO: Pod "pod-subpath-test-dynamicpv-x4q6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.487911281s
Jul  6 06:39:59.371: INFO: Pod "pod-subpath-test-dynamicpv-x4q6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.585323823s
Jul  6 06:40:01.469: INFO: Pod "pod-subpath-test-dynamicpv-x4q6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.68278464s
STEP: Saw pod success
Jul  6 06:40:01.469: INFO: Pod "pod-subpath-test-dynamicpv-x4q6" satisfied condition "Succeeded or Failed"
Jul  6 06:40:01.566: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-x4q6 container test-container-subpath-dynamicpv-x4q6: <nil>
STEP: delete the pod
Jul  6 06:40:01.768: INFO: Waiting for pod pod-subpath-test-dynamicpv-x4q6 to disappear
Jul  6 06:40:01.864: INFO: Pod pod-subpath-test-dynamicpv-x4q6 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-x4q6
Jul  6 06:40:01.865: INFO: Deleting pod "pod-subpath-test-dynamicpv-x4q6" in namespace "provisioning-3725"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":29,"skipped":299,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:5.469 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:265
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":24,"skipped":232,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 5 lines ...
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0706 06:35:46.189184   12414 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0706 06:35:46.189254   12414 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  6 06:40:46.384: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:40:46.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5947" for this suite.


• [SLOW TEST:307.575 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":36,"skipped":204,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:40:46.591: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 93 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:40:46.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":37,"skipped":220,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and write from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:238
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":49,"skipped":385,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:40:49.290: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 31 lines ...
STEP: Destroying namespace "services-1692" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":50,"skipped":390,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:40:50.085: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 203 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should return command exit codes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:499
      execing into a container with a failing command
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:505
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":30,"skipped":301,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:40:55.239: INFO: Driver "csi-hostpath" does not support topology - skipping
... skipping 5 lines ...
[sig-storage] CSI Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver "csi-hostpath" does not support topology - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:92
------------------------------
... skipping 97 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":25,"skipped":294,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

SS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 06:40:50.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729" in namespace "projected-8654" to be "Succeeded or Failed"
Jul  6 06:40:50.885: INFO: Pod "downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729": Phase="Pending", Reason="", readiness=false. Elapsed: 96.56294ms
Jul  6 06:40:52.984: INFO: Pod "downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195357103s
Jul  6 06:40:55.086: INFO: Pod "downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729": Phase="Pending", Reason="", readiness=false. Elapsed: 4.297452294s
Jul  6 06:40:57.183: INFO: Pod "downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.394186454s
STEP: Saw pod success
Jul  6 06:40:57.183: INFO: Pod "downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729" satisfied condition "Succeeded or Failed"
Jul  6 06:40:57.279: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729 container client-container: <nil>
STEP: delete the pod
Jul  6 06:40:57.484: INFO: Waiting for pod downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729 to disappear
Jul  6 06:40:57.580: INFO: Pod downwardapi-volume-3943314a-7814-48c1-9aad-8a778c30f729 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:7.580 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":412,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:40:57.784: INFO: Driver local doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 102 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:40:58.480: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:40:59.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-6648" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":31,"skipped":312,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1037
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":25,"skipped":234,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:40:59.325: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 81 lines ...
• [SLOW TEST:6.081 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":296,"failed":1,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:103

    Only supported for node OS distro [gci ubuntu custom] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/volumes.go:69
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets","total":-1,"completed":29,"skipped":290,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:39:47.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
• [SLOW TEST:77.355 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete successful finished jobs with limit of one successful job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:278
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete successful finished jobs with limit of one successful job","total":-1,"completed":30,"skipped":290,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:04.548: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 26 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Jul  6 06:40:59.875: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 06:40:59.983: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-lc5t
STEP: Creating a pod to test subpath
Jul  6 06:41:00.084: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-lc5t" in namespace "provisioning-3812" to be "Succeeded or Failed"
Jul  6 06:41:00.197: INFO: Pod "pod-subpath-test-inlinevolume-lc5t": Phase="Pending", Reason="", readiness=false. Elapsed: 113.28173ms
Jul  6 06:41:02.298: INFO: Pod "pod-subpath-test-inlinevolume-lc5t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213870543s
Jul  6 06:41:04.395: INFO: Pod "pod-subpath-test-inlinevolume-lc5t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311237408s
STEP: Saw pod success
Jul  6 06:41:04.395: INFO: Pod "pod-subpath-test-inlinevolume-lc5t" satisfied condition "Succeeded or Failed"
Jul  6 06:41:04.515: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-lc5t container test-container-volume-inlinevolume-lc5t: <nil>
STEP: delete the pod
Jul  6 06:41:04.721: INFO: Waiting for pod pod-subpath-test-inlinevolume-lc5t to disappear
Jul  6 06:41:04.817: INFO: Pod pod-subpath-test-inlinevolume-lc5t no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-lc5t
Jul  6 06:41:04.817: INFO: Deleting pod "pod-subpath-test-inlinevolume-lc5t" in namespace "provisioning-3812"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":26,"skipped":247,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:7.030 seconds]
[sig-instrumentation] MetricsGrabber
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should grab all metrics from API server.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/monitoring/metrics_grabber.go:65
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":32,"skipped":316,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:06.311: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 91 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-c1255ee2-9ee4-480e-b2ec-e5861b48931f
STEP: Creating a pod to test consume secrets
Jul  6 06:41:05.612: INFO: Waiting up to 5m0s for pod "pod-secrets-ce325d1a-8116-4629-8dd3-7298d30f711b" in namespace "secrets-9645" to be "Succeeded or Failed"
Jul  6 06:41:05.708: INFO: Pod "pod-secrets-ce325d1a-8116-4629-8dd3-7298d30f711b": Phase="Pending", Reason="", readiness=false. Elapsed: 95.407006ms
Jul  6 06:41:07.805: INFO: Pod "pod-secrets-ce325d1a-8116-4629-8dd3-7298d30f711b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.192094878s
STEP: Saw pod success
Jul  6 06:41:07.805: INFO: Pod "pod-secrets-ce325d1a-8116-4629-8dd3-7298d30f711b" satisfied condition "Succeeded or Failed"
Jul  6 06:41:07.900: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-secrets-ce325d1a-8116-4629-8dd3-7298d30f711b container secret-volume-test: <nil>
STEP: delete the pod
Jul  6 06:41:08.097: INFO: Waiting for pod pod-secrets-ce325d1a-8116-4629-8dd3-7298d30f711b to disappear
Jul  6 06:41:08.194: INFO: Pod pod-secrets-ce325d1a-8116-4629-8dd3-7298d30f711b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:08.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9645" for this suite.
STEP: Destroying namespace "secret-namespace-1293" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":294,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:08.511: INFO: Only supported for providers [gce gke] (not aws)
... skipping 102 lines ...
Jul  6 06:40:16.493: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  6 06:40:16.592: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpathhmqll] to have phase Bound
Jul  6 06:40:16.696: INFO: PersistentVolumeClaim csi-hostpathhmqll found but phase is Pending instead of Bound.
Jul  6 06:40:18.793: INFO: PersistentVolumeClaim csi-hostpathhmqll found and phase=Bound (2.20080697s)
STEP: Expanding non-expandable pvc
Jul  6 06:40:18.986: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  6 06:40:19.181: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:21.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:23.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:25.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:27.374: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:29.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:31.375: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:33.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:35.375: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:37.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:39.375: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:41.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:43.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:45.376: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:47.377: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:49.375: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  6 06:40:49.568: INFO: Error updating pvc csi-hostpathhmqll: persistentvolumeclaims "csi-hostpathhmqll" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul  6 06:40:49.568: INFO: Deleting PersistentVolumeClaim "csi-hostpathhmqll"
Jul  6 06:40:49.667: INFO: Waiting up to 5m0s for PersistentVolume pvc-f8b19344-a342-44cc-ab5b-5f2ab3caa21e to get deleted
Jul  6 06:40:49.763: INFO: PersistentVolume pvc-f8b19344-a342-44cc-ab5b-5f2ab3caa21e was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-9111
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":27,"skipped":282,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:12.349: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:13.276: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 67 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul  6 06:41:09.010: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  6 06:41:09.010: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jmvv
STEP: Creating a pod to test subpath
Jul  6 06:41:09.111: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jmvv" in namespace "provisioning-8463" to be "Succeeded or Failed"
Jul  6 06:41:09.208: INFO: Pod "pod-subpath-test-inlinevolume-jmvv": Phase="Pending", Reason="", readiness=false. Elapsed: 97.014716ms
Jul  6 06:41:11.304: INFO: Pod "pod-subpath-test-inlinevolume-jmvv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193050228s
Jul  6 06:41:13.401: INFO: Pod "pod-subpath-test-inlinevolume-jmvv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.289767243s
STEP: Saw pod success
Jul  6 06:41:13.401: INFO: Pod "pod-subpath-test-inlinevolume-jmvv" satisfied condition "Succeeded or Failed"
Jul  6 06:41:13.499: INFO: Trying to get logs from node ip-172-20-59-118.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-jmvv container test-container-volume-inlinevolume-jmvv: <nil>
STEP: delete the pod
Jul  6 06:41:13.700: INFO: Waiting for pod pod-subpath-test-inlinevolume-jmvv to disappear
Jul  6 06:41:13.796: INFO: Pod pod-subpath-test-inlinevolume-jmvv no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jmvv
Jul  6 06:41:13.796: INFO: Deleting pod "pod-subpath-test-inlinevolume-jmvv" in namespace "provisioning-8463"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":32,"skipped":303,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:14.200: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 33 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-985
STEP: Creating statefulset with conflicting port in namespace statefulset-985
STEP: Waiting until pod test-pod will start running in namespace statefulset-985
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-985
Jul  6 06:40:50.172: INFO: Observed stateful pod in namespace: statefulset-985, name: ss-0, uid: 78d9bef6-ad09-40f8-8ceb-cae24130f5bc, status phase: Pending. Waiting for statefulset controller to delete.
Jul  6 06:40:50.586: INFO: Observed stateful pod in namespace: statefulset-985, name: ss-0, uid: 78d9bef6-ad09-40f8-8ceb-cae24130f5bc, status phase: Failed. Waiting for statefulset controller to delete.
Jul  6 06:40:50.592: INFO: Observed stateful pod in namespace: statefulset-985, name: ss-0, uid: 78d9bef6-ad09-40f8-8ceb-cae24130f5bc, status phase: Failed. Waiting for statefulset controller to delete.
Jul  6 06:40:50.596: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-985
STEP: Removing pod with conflicting port in namespace statefulset-985
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-985 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Jul  6 06:40:57.089: INFO: Deleting all statefulset in ns statefulset-985
... skipping 11 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":38,"skipped":223,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:18.185: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 21 lines ...
Jul  6 06:41:18.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul  6 06:41:18.782: INFO: Waiting up to 5m0s for pod "downward-api-bade4722-4037-4510-a176-6b0f2bc6b695" in namespace "downward-api-9004" to be "Succeeded or Failed"
Jul  6 06:41:18.879: INFO: Pod "downward-api-bade4722-4037-4510-a176-6b0f2bc6b695": Phase="Pending", Reason="", readiness=false. Elapsed: 97.434767ms
Jul  6 06:41:20.977: INFO: Pod "downward-api-bade4722-4037-4510-a176-6b0f2bc6b695": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.195149338s
STEP: Saw pod success
Jul  6 06:41:20.977: INFO: Pod "downward-api-bade4722-4037-4510-a176-6b0f2bc6b695" satisfied condition "Succeeded or Failed"
Jul  6 06:41:21.074: INFO: Trying to get logs from node ip-172-20-36-135.eu-west-2.compute.internal pod downward-api-bade4722-4037-4510-a176-6b0f2bc6b695 container dapi-container: <nil>
STEP: delete the pod
Jul  6 06:41:21.275: INFO: Waiting for pod downward-api-bade4722-4037-4510-a176-6b0f2bc6b695 to disappear
Jul  6 06:41:21.372: INFO: Pod downward-api-bade4722-4037-4510-a176-6b0f2bc6b695 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:21.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9004" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":225,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:8.525 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:21.843: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 165 lines ...
Jul  6 06:39:21.455: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6172 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.64.78.36:80 2>&1 || true; echo; done'
Jul  6 06:41:13.596: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O 
- http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ echo\n+ wget -q -T 1 -O - http://100.64.78.36:80\n+ true\n+ echo\n"
Jul  6 06:41:13.596: INFO: stdout: "wget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nup-down-1-vqlpg\nup-down-1-vqlpg\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nup-down-1-vqlpg\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed 
out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\nup-down-1-vqlpg\nup-down-1-vqlpg\nwget: download timed out\n\nwget: download timed out\n\nup-down-1-vqlpg\nwget: download timed out\n\n"
Jul  6 06:41:13.596: INFO: Unable to reach the following endpoints of service 100.64.78.36: map[up-down-1-98pfm:{} up-down-1-dfbvz:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6172
STEP: Deleting pod verify-service-up-exec-pod-zcrwk in namespace services-6172
Jul  6 06:41:18.801: FAIL: Unexpected error:
    <*errors.errorString | 0xc0031f4080>: {
        s: "service verification failed for: 100.64.78.36\nexpected [up-down-1-98pfm up-down-1-dfbvz up-down-1-vqlpg]\nreceived [up-down-1-vqlpg wget: download timed out]",
    }
    service verification failed for: 100.64.78.36
    expected [up-down-1-98pfm up-down-1-dfbvz up-down-1-vqlpg]
    received [up-down-1-vqlpg wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.8()
... skipping 305 lines ...
• Failure [340.923 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to up and down services [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1033

  Jul  6 06:41:18.801: Unexpected error:
      <*errors.errorString | 0xc0031f4080>: {
          s: "service verification failed for: 100.64.78.36\nexpected [up-down-1-98pfm up-down-1-dfbvz up-down-1-vqlpg]\nreceived [up-down-1-vqlpg wget: download timed out]",
      }
      service verification failed for: 100.64.78.36
      expected [up-down-1-98pfm up-down-1-dfbvz up-down-1-vqlpg]
      received [up-down-1-vqlpg wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1049
------------------------------
{"msg":"FAILED [sig-network] Services should be able to up and down services","total":-1,"completed":33,"skipped":264,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-network] Services should be able to up and down services"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 15 lines ...
Jul  6 06:41:16.176: INFO: PersistentVolumeClaim pvc-8f977 found but phase is Pending instead of Bound.
Jul  6 06:41:18.278: INFO: PersistentVolumeClaim pvc-8f977 found and phase=Bound (2.199202071s)
Jul  6 06:41:18.278: INFO: Waiting up to 3m0s for PersistentVolume local-6b44k to have phase Bound
Jul  6 06:41:18.376: INFO: PersistentVolume local-6b44k found and phase=Bound (97.99675ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-5qm8
STEP: Creating a pod to test subpath
Jul  6 06:41:18.667: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-5qm8" in namespace "provisioning-7140" to be "Succeeded or Failed"
Jul  6 06:41:18.764: INFO: Pod "pod-subpath-test-preprovisionedpv-5qm8": Phase="Pending", Reason="", readiness=false. Elapsed: 96.312908ms
Jul  6 06:41:20.860: INFO: Pod "pod-subpath-test-preprovisionedpv-5qm8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193170312s
Jul  6 06:41:22.958: INFO: Pod "pod-subpath-test-preprovisionedpv-5qm8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.291007345s
STEP: Saw pod success
Jul  6 06:41:22.958: INFO: Pod "pod-subpath-test-preprovisionedpv-5qm8" satisfied condition "Succeeded or Failed"
Jul  6 06:41:23.055: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-5qm8 container test-container-subpath-preprovisionedpv-5qm8: <nil>
STEP: delete the pod
Jul  6 06:41:23.261: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-5qm8 to disappear
Jul  6 06:41:23.358: INFO: Pod pod-subpath-test-preprovisionedpv-5qm8 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-5qm8
Jul  6 06:41:23.358: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-5qm8" in namespace "provisioning-7140"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":28,"skipped":283,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:24.788: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:26.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5868" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":230,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Jul  6 06:40:57.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
Jul  6 06:40:58.317: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  6 06:40:58.513: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5060" in namespace "provisioning-5060" to be "Succeeded or Failed"
Jul  6 06:40:58.609: INFO: Pod "hostpath-symlink-prep-provisioning-5060": Phase="Pending", Reason="", readiness=false. Elapsed: 96.456586ms
Jul  6 06:41:00.714: INFO: Pod "hostpath-symlink-prep-provisioning-5060": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.201387146s
STEP: Saw pod success
Jul  6 06:41:00.714: INFO: Pod "hostpath-symlink-prep-provisioning-5060" satisfied condition "Succeeded or Failed"
Jul  6 06:41:00.714: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5060" in namespace "provisioning-5060"
Jul  6 06:41:00.822: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5060" to be fully deleted
Jul  6 06:41:00.919: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-fzkd
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 06:41:01.018: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-fzkd" in namespace "provisioning-5060" to be "Succeeded or Failed"
Jul  6 06:41:01.115: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Pending", Reason="", readiness=false. Elapsed: 97.029428ms
Jul  6 06:41:03.214: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195215018s
Jul  6 06:41:05.311: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 4.292522997s
Jul  6 06:41:07.409: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 6.390194318s
Jul  6 06:41:09.506: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 8.48730303s
Jul  6 06:41:11.604: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 10.585064029s
Jul  6 06:41:13.702: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 12.683105921s
Jul  6 06:41:15.800: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 14.781948649s
Jul  6 06:41:17.898: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 16.879181729s
Jul  6 06:41:19.995: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 18.976496702s
Jul  6 06:41:22.093: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Running", Reason="", readiness=true. Elapsed: 21.074506304s
Jul  6 06:41:24.191: INFO: Pod "pod-subpath-test-inlinevolume-fzkd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.172232248s
STEP: Saw pod success
Jul  6 06:41:24.191: INFO: Pod "pod-subpath-test-inlinevolume-fzkd" satisfied condition "Succeeded or Failed"
Jul  6 06:41:24.287: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-inlinevolume-fzkd container test-container-subpath-inlinevolume-fzkd: <nil>
STEP: delete the pod
Jul  6 06:41:24.487: INFO: Waiting for pod pod-subpath-test-inlinevolume-fzkd to disappear
Jul  6 06:41:24.587: INFO: Pod pod-subpath-test-inlinevolume-fzkd no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-fzkd
Jul  6 06:41:24.587: INFO: Deleting pod "pod-subpath-test-inlinevolume-fzkd" in namespace "provisioning-5060"
STEP: Deleting pod
Jul  6 06:41:24.683: INFO: Deleting pod "pod-subpath-test-inlinevolume-fzkd" in namespace "provisioning-5060"
Jul  6 06:41:24.877: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-5060" in namespace "provisioning-5060" to be "Succeeded or Failed"
Jul  6 06:41:24.973: INFO: Pod "hostpath-symlink-prep-provisioning-5060": Phase="Pending", Reason="", readiness=false. Elapsed: 96.37591ms
Jul  6 06:41:27.071: INFO: Pod "hostpath-symlink-prep-provisioning-5060": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.193788878s
STEP: Saw pod success
Jul  6 06:41:27.071: INFO: Pod "hostpath-symlink-prep-provisioning-5060" satisfied condition "Succeeded or Failed"
Jul  6 06:41:27.071: INFO: Deleting pod "hostpath-symlink-prep-provisioning-5060" in namespace "provisioning-5060"
Jul  6 06:41:27.170: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-5060" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:27.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-5060" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":52,"skipped":424,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:27.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-9666" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":29,"skipped":291,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:27.872: INFO: Only supported for providers [gce gke] (not aws)
... skipping 36 lines ...
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
Jul  6 06:40:38.845: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:40:49.141: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:40:59.347: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:41:09.648: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:41:19.848: INFO: Waiting for webhook configuration to be ready...
Jul  6 06:41:19.849: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 509 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:41:19.849: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:988
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":18,"skipped":131,"failed":4,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:41:29.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:30.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4509" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":19,"skipped":131,"failed":4,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":34,"skipped":267,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-network] Services should be able to up and down services"]}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] KubeProxy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:13.104 seconds]
[sig-network] KubeProxy
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should set TCP CLOSE_WAIT timeout [Privileged]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/kube_proxy.go:52
------------------------------
{"msg":"PASSED [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]","total":-1,"completed":6,"skipped":52,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:35.060: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
• [SLOW TEST:5.517 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:265
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":7,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:40.600: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 101 lines ...
Jul  6 06:41:10.459: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  6 06:41:10.561: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath2r9xd] to have phase Bound
Jul  6 06:41:10.657: INFO: PersistentVolumeClaim csi-hostpath2r9xd found but phase is Pending instead of Bound.
Jul  6 06:41:12.753: INFO: PersistentVolumeClaim csi-hostpath2r9xd found and phase=Bound (2.192548921s)
STEP: Creating pod pod-subpath-test-dynamicpv-9595
STEP: Creating a pod to test subpath
Jul  6 06:41:13.050: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-9595" in namespace "provisioning-6443" to be "Succeeded or Failed"
Jul  6 06:41:13.152: INFO: Pod "pod-subpath-test-dynamicpv-9595": Phase="Pending", Reason="", readiness=false. Elapsed: 101.57578ms
Jul  6 06:41:15.249: INFO: Pod "pod-subpath-test-dynamicpv-9595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198737791s
Jul  6 06:41:17.346: INFO: Pod "pod-subpath-test-dynamicpv-9595": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295566316s
Jul  6 06:41:19.442: INFO: Pod "pod-subpath-test-dynamicpv-9595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.39240655s
STEP: Saw pod success
Jul  6 06:41:19.442: INFO: Pod "pod-subpath-test-dynamicpv-9595" satisfied condition "Succeeded or Failed"
Jul  6 06:41:19.539: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-9595 container test-container-volume-dynamicpv-9595: <nil>
STEP: delete the pod
Jul  6 06:41:19.748: INFO: Waiting for pod pod-subpath-test-dynamicpv-9595 to disappear
Jul  6 06:41:19.844: INFO: Pod pod-subpath-test-dynamicpv-9595 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-9595
Jul  6 06:41:19.844: INFO: Deleting pod "pod-subpath-test-dynamicpv-9595" in namespace "provisioning-6443"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","total":-1,"completed":27,"skipped":249,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:42.667: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:43.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9002" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":66,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:44.078: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":30,"skipped":304,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:41:31.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:351
STEP: Creating a job
STEP: Ensuring job exceed backofflimit
STEP: Checking that 2 pod created and status is failed
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:44.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5645" for this suite.


• [SLOW TEST:12.969 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should fail to exceed backoffLimit
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:351
------------------------------
{"msg":"PASSED [sig-apps] Job should fail to exceed backoffLimit","total":-1,"completed":31,"skipped":304,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-66ll
STEP: Creating a pod to test atomic-volume-subpath
Jul  6 06:41:28.259: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-66ll" in namespace "subpath-6841" to be "Succeeded or Failed"
Jul  6 06:41:28.356: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Pending", Reason="", readiness=false. Elapsed: 96.426085ms
Jul  6 06:41:30.453: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 2.19330402s
Jul  6 06:41:32.550: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 4.290765431s
Jul  6 06:41:34.648: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 6.38844509s
Jul  6 06:41:36.746: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 8.486105546s
Jul  6 06:41:38.845: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 10.585219886s
Jul  6 06:41:40.949: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 12.689119477s
Jul  6 06:41:43.045: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 14.785735357s
Jul  6 06:41:45.143: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 16.883405356s
Jul  6 06:41:47.240: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 18.980355457s
Jul  6 06:41:49.336: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Running", Reason="", readiness=true. Elapsed: 21.076794021s
Jul  6 06:41:51.433: INFO: Pod "pod-subpath-test-configmap-66ll": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.173945013s
STEP: Saw pod success
Jul  6 06:41:51.433: INFO: Pod "pod-subpath-test-configmap-66ll" satisfied condition "Succeeded or Failed"
Jul  6 06:41:51.530: INFO: Trying to get logs from node ip-172-20-56-54.eu-west-2.compute.internal pod pod-subpath-test-configmap-66ll container test-container-subpath-configmap-66ll: <nil>
STEP: delete the pod
Jul  6 06:41:51.728: INFO: Waiting for pod pod-subpath-test-configmap-66ll to disappear
Jul  6 06:41:51.825: INFO: Pod pod-subpath-test-configmap-66ll no longer exists
STEP: Deleting pod pod-subpath-test-configmap-66ll
Jul  6 06:41:51.825: INFO: Deleting pod "pod-subpath-test-configmap-66ll" in namespace "subpath-6841"
... skipping 8 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":53,"skipped":425,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:52.138: INFO: Only supported for providers [gce gke] (not aws)
... skipping 86 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:41:56.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7598" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints","total":-1,"completed":54,"skipped":446,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:41:56.477: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 100 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","total":-1,"completed":22,"skipped":180,"failed":1,"failures":["[sig-network] Services should create endpoints for unready pods"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:42:01.744: INFO: Only supported for providers [openstack] (not aws)
... skipping 79 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":18,"skipped":117,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:42:02.206: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 32 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:42:02.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8842" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":23,"skipped":186,"failed":1,"failures":["[sig-network] Services should create endpoints for unready pods"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:42:03.157: INFO: Only supported for providers [gce gke] (not aws)
... skipping 89 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":32,"skipped":308,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 101 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity used, have capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":41,"skipped":234,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:42:07.841: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 65 lines ...
Jul  6 06:40:32.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  6 06:40:32.800: INFO: created pod
Jul  6 06:40:32.800: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-1286" to be "Succeeded or Failed"
Jul  6 06:40:32.897: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 96.813225ms
Jul  6 06:40:34.994: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 2.193934117s
Jul  6 06:40:37.092: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 4.291638746s
Jul  6 06:40:39.189: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 6.389194721s
Jul  6 06:40:41.287: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 8.486862164s
Jul  6 06:40:43.384: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 10.584120998s
... skipping 18 lines ...
Jul  6 06:41:23.262: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 50.462303805s
Jul  6 06:41:25.359: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 52.559442635s
Jul  6 06:41:27.458: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 54.657584356s
Jul  6 06:41:29.556: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 56.755747382s
Jul  6 06:41:31.653: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 58.853211306s
Jul  6 06:41:33.754: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.953765346s
Jul  6 06:41:35.853: INFO: Pod "oidc-discovery-validator": Phase="Failed", Reason="", readiness=false. Elapsed: 1m3.053049619s
Jul  6 06:42:05.855: INFO: polling logs
Jul  6 06:42:05.954: INFO: Pod logs: 
2021/07/06 06:40:33 OK: Got token
2021/07/06 06:40:33 validating with in-cluster discovery
2021/07/06 06:40:33 OK: got issuer https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
2021/07/06 06:40:33 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery", Subject:"system:serviceaccount:svcaccounts-1286:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1625554232, NotBefore:1625553632, IssuedAt:1625553632, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1286", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"028523a8-c0b3-4d43-9b2d-e4161efca729"}}}
2021/07/06 06:41:03 failed to validate with in-cluster discovery: Get "https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery/.well-known/openid-configuration": dial tcp: i/o timeout
2021/07/06 06:41:03 falling back to validating with external discovery
2021/07/06 06:41:03 OK: got issuer https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery
2021/07/06 06:41:03 Full, not-validated claims: 
openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery", Subject:"system:serviceaccount:svcaccounts-1286:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1625554232, NotBefore:1625553632, IssuedAt:1625553632, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1286", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"028523a8-c0b3-4d43-9b2d-e4161efca729"}}}
2021/07/06 06:41:33 Get "https://k8s-kops-prow.s3.us-west-1.amazonaws.com/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery/.well-known/openid-configuration": dial tcp: i/o timeout

Jul  6 06:42:05.954: FAIL: Unexpected error:
    <*errors.errorString | 0xc003d3fa90>: {
        s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:41:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:41:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.56.54 PodIP:100.96.1.92 PodIPs:[{IP:100.96.1.92}] StartTime:2021-07-06 06:40:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-07-06 06:40:33 +0000 UTC,FinishedAt:2021-07-06 06:41:33 +0000 UTC,ContainerID:containerd://465cbf549454a79e01241eede21e7f1b4a7ff2c8f111b9362a8bbfeaec66fbba,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://465cbf549454a79e01241eede21e7f1b4a7ff2c8f111b9362a8bbfeaec66fbba Started:0xc00404d520}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
    pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:41:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:41:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.56.54 PodIP:100.96.1.92 PodIPs:[{IP:100.96.1.92}] StartTime:2021-07-06 06:40:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-07-06 06:40:33 +0000 UTC,FinishedAt:2021-07-06 06:41:33 +0000 UTC,ContainerID:containerd://465cbf549454a79e01241eede21e7f1b4a7ff2c8f111b9362a8bbfeaec66fbba,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://465cbf549454a79e01241eede21e7f1b4a7ff2c8f111b9362a8bbfeaec66fbba Started:0xc00404d520}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/auth.glob..func6.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789 +0xc45
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000103800)
... skipping 10 lines ...
STEP: Found 4 events.
Jul  6 06:42:06.150: INFO: At 2021-07-06 06:40:32 +0000 UTC - event for oidc-discovery-validator: {default-scheduler } Scheduled: Successfully assigned svcaccounts-1286/oidc-discovery-validator to ip-172-20-56-54.eu-west-2.compute.internal
Jul  6 06:42:06.150: INFO: At 2021-07-06 06:40:33 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-56-54.eu-west-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
Jul  6 06:42:06.150: INFO: At 2021-07-06 06:40:33 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-56-54.eu-west-2.compute.internal} Created: Created container oidc-discovery-validator
Jul  6 06:42:06.150: INFO: At 2021-07-06 06:40:33 +0000 UTC - event for oidc-discovery-validator: {kubelet ip-172-20-56-54.eu-west-2.compute.internal} Started: Started container oidc-discovery-validator
Jul  6 06:42:06.247: INFO: POD                       NODE                                        PHASE   GRACE  CONDITIONS
Jul  6 06:42:06.247: INFO: oidc-discovery-validator  ip-172-20-56-54.eu-west-2.compute.internal  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:40:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:41:34 +0000 UTC ContainersNotReady containers with unready status: [oidc-discovery-validator]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:41:34 +0000 UTC ContainersNotReady containers with unready status: [oidc-discovery-validator]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-07-06 06:40:32 +0000 UTC  }]
Jul  6 06:42:06.248: INFO: 
Jul  6 06:42:06.345: INFO: 
Logging node info for node ip-172-20-32-57.eu-west-2.compute.internal
Jul  6 06:42:06.445: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-32-57.eu-west-2.compute.internal    69f850ad-0e3c-45b2-8481-0c592e1b2544 45720 0 2021-07-06 06:08:55 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-west-2 failure-domain.beta.kubernetes.io/zone:eu-west-2a kops.k8s.io/instancegroup:nodes-eu-west-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-32-57.eu-west-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-west-2a topology.hostpath.csi/node:ip-172-20-32-57.eu-west-2.compute.internal topology.kubernetes.io/region:eu-west-2 topology.kubernetes.io/zone:eu-west-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-4484":"ip-172-20-32-57.eu-west-2.compute.internal","ebs.csi.aws.com":"i-04b2469cf8d928a72"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-06 06:08:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-06 06:08:56 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-06 06:08:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-06 06:08:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kube-controller-manager Update v1 2021-07-06 06:41:55 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2021-07-06 06:41:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-west-2a/i-04b2469cf8d928a72,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-06 06:41:57 +0000 UTC,LastTransitionTime:2021-07-06 06:08:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-06 06:41:57 +0000 UTC,LastTransitionTime:2021-07-06 06:08:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-06 06:41:57 +0000 UTC,LastTransitionTime:2021-07-06 06:08:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-06 06:41:57 +0000 UTC,LastTransitionTime:2021-07-06 06:09:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.32.57,},NodeAddress{Type:ExternalIP,Address:35.176.18.223,},NodeAddress{Type:InternalDNS,Address:ip-172-20-32-57.eu-west-2.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-32-57.eu-west-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-35-176-18-223.eu-west-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec224b3f3013408803992ba3241c2065,SystemUUID:ec224b3f-3013-4088-0399-2ba3241c2065,BootID:78da66d3-86e3-4ca8-a949-169540ab78f8,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db docker.io/library/nginx:latest],SizeBytes:53740695,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0c759b25d3a011e4f 
kubernetes.io/csi/ebs.csi.aws.com^vol-0f4f288c605d263fb],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0c759b25d3a011e4f,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f4f288c605d263fb,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-013a49a3266463e4c,DevicePath:,},},Config:nil,},}
Jul  6 06:42:06.445: INFO: 
Logging kubelet events for node ip-172-20-32-57.eu-west-2.compute.internal
... skipping 251 lines ...
• Failure [98.272 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:42:05.954: Unexpected error:
      <*errors.errorString | 0xc003d3fa90>: {
          s: "pod \"oidc-discovery-validator\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:41:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:41:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.56.54 PodIP:100.96.1.92 PodIPs:[{IP:100.96.1.92}] StartTime:2021-07-06 06:40:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-07-06 06:40:33 +0000 UTC,FinishedAt:2021-07-06 06:41:33 +0000 UTC,ContainerID:containerd://465cbf549454a79e01241eede21e7f1b4a7ff2c8f111b9362a8bbfeaec66fbba,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://465cbf549454a79e01241eede21e7f1b4a7ff2c8f111b9362a8bbfeaec66fbba Started:0xc00404d520}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
      }
      pod "oidc-discovery-validator" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:41:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:41:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [oidc-discovery-validator]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.20.56.54 PodIP:100.96.1.92 PodIPs:[{IP:100.96.1.92}] StartTime:2021-07-06 06:40:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:oidc-discovery-validator State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2021-07-06 06:40:33 +0000 UTC,FinishedAt:2021-07-06 06:41:33 +0000 UTC,ContainerID:containerd://465cbf549454a79e01241eede21e7f1b4a7ff2c8f111b9362a8bbfeaec66fbba,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://465cbf549454a79e01241eede21e7f1b4a7ff2c8f111b9362a8bbfeaec66fbba Started:0xc00404d520}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789
------------------------------
{"msg":"FAILED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":47,"skipped":350,"failed":7,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
STEP: Destroying namespace "services-1797" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":48,"skipped":351,"failed":7,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":42,"skipped":241,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  6 06:42:12.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e27428f-7a51-4c35-871d-8a58a769ab8f" in namespace "downward-api-2844" to be "Succeeded or Failed"
Jul  6 06:42:13.075: INFO: Pod "downwardapi-volume-1e27428f-7a51-4c35-871d-8a58a769ab8f": Phase="Pending", Reason="", readiness=false. Elapsed: 96.967728ms
Jul  6 06:42:15.172: INFO: Pod "downwardapi-volume-1e27428f-7a51-4c35-871d-8a58a769ab8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.194628397s
STEP: Saw pod success
Jul  6 06:42:15.173: INFO: Pod "downwardapi-volume-1e27428f-7a51-4c35-871d-8a58a769ab8f" satisfied condition "Succeeded or Failed"
Jul  6 06:42:15.270: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod downwardapi-volume-1e27428f-7a51-4c35-871d-8a58a769ab8f container client-container: <nil>
STEP: delete the pod
Jul  6 06:42:15.476: INFO: Waiting for pod downwardapi-volume-1e27428f-7a51-4c35-871d-8a58a769ab8f to disappear
Jul  6 06:42:15.573: INFO: Pod downwardapi-volume-1e27428f-7a51-4c35-871d-8a58a769ab8f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:42:15.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2844" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":355,"failed":7,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:42:15.778: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  6 06:42:16.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4971" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":43,"skipped":243,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  6 06:42:16.981: INFO: Only supported for providers [vsphere] (not aws)
... skipping 70 lines ...
Jul  6 06:42:17.313: INFO: PersistentVolumeClaim pvc-l546r found but phase is Pending instead of Bound.
Jul  6 06:42:19.410: INFO: PersistentVolumeClaim pvc-l546r found and phase=Bound (8.484984683s)
Jul  6 06:42:19.410: INFO: Waiting up to 3m0s for PersistentVolume local-nlc6t to have phase Bound
Jul  6 06:42:19.507: INFO: PersistentVolume local-nlc6t found and phase=Bound (96.381961ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8pfr
STEP: Creating a pod to test subpath
Jul  6 06:42:19.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8pfr" in namespace "provisioning-4379" to be "Succeeded or Failed"
Jul  6 06:42:19.895: INFO: Pod "pod-subpath-test-preprovisionedpv-8pfr": Phase="Pending", Reason="", readiness=false. Elapsed: 96.574541ms
Jul  6 06:42:21.999: INFO: Pod "pod-subpath-test-preprovisionedpv-8pfr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.200094678s
STEP: Saw pod success
Jul  6 06:42:21.999: INFO: Pod "pod-subpath-test-preprovisionedpv-8pfr" satisfied condition "Succeeded or Failed"
Jul  6 06:42:22.095: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-preprovisionedpv-8pfr container test-container-subpath-preprovisionedpv-8pfr: <nil>
STEP: delete the pod
Jul  6 06:42:22.312: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8pfr to disappear
Jul  6 06:42:22.409: INFO: Pod pod-subpath-test-preprovisionedpv-8pfr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8pfr
Jul  6 06:42:22.409: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8pfr" in namespace "provisioning-4379"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":24,"skipped":195,"failed":1,"failures":["[sig-network] Services should create endpoints for unready pods"]}
Jul  6 06:42:23.751: INFO: Running AfterSuite actions on all nodes
Jul  6 06:42:23.751: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:42:23.751: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:42:23.751: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:42:23.751: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:42:23.751: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 43 lines ...
Jul  6 06:40:40.449: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-6456 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.65.122.235:80 2>&1 || true; echo; done'
Jul  6 06:42:15.629: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - 
http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 
-O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.65.122.235:80\n+ echo\n"
Jul  6 06:42:15.629: INFO: stdout: "wget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed 
out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nservice-proxy-toggled-xw2nf\nwget: download timed out\n\nservice-proxy-toggled-xw2nf\n"
Jul  6 06:42:15.629: INFO: Unable to reach the following endpoints of service 100.65.122.235: map[service-proxy-toggled-58c56:{} service-proxy-toggled-gjcgw:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-6456
STEP: Deleting pod verify-service-up-exec-pod-8m52f in namespace services-6456
Jul  6 06:42:21.055: FAIL: Unexpected error:
    <*errors.errorString | 0xc003c0c080>: {
        s: "service verification failed for: 100.65.122.235\nexpected [service-proxy-toggled-58c56 service-proxy-toggled-gjcgw service-proxy-toggled-xw2nf]\nreceived [service-proxy-toggled-xw2nf wget: download timed out]",
    }
    service verification failed for: 100.65.122.235
    expected [service-proxy-toggled-58c56 service-proxy-toggled-gjcgw service-proxy-toggled-xw2nf]
    received [service-proxy-toggled-xw2nf wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.28()
... skipping 324 lines ...
• Failure [328.512 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/service-proxy-name [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1883

  Jul  6 06:42:21.055: Unexpected error:
      <*errors.errorString | 0xc003c0c080>: {
          s: "service verification failed for: 100.65.122.235\nexpected [service-proxy-toggled-58c56 service-proxy-toggled-gjcgw service-proxy-toggled-xw2nf]\nreceived [service-proxy-toggled-xw2nf wget: download timed out]",
      }
      service verification failed for: 100.65.122.235
      expected [service-proxy-toggled-58c56 service-proxy-toggled-gjcgw service-proxy-toggled-xw2nf]
      received [service-proxy-toggled-xw2nf wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1907
------------------------------
{"msg":"FAILED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":12,"skipped":114,"failed":3,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name"]}
Jul  6 06:42:25.544: INFO: Running AfterSuite actions on all nodes
Jul  6 06:42:25.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:42:25.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:42:25.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:42:25.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:42:25.544: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 24 lines ...
Jul  6 06:39:25.582: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2285.svc.cluster.local from pod dns-2285/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63: the server is currently unable to handle the request (get pods dns-test-924ecab1-79a1-450c-805d-c2e62538ae63)
Jul  6 06:39:55.682: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-2285.svc.cluster.local from pod dns-2285/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63: the server is currently unable to handle the request (get pods dns-test-924ecab1-79a1-450c-805d-c2e62538ae63)
Jul  6 06:40:25.780: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-2285.svc.cluster.local from pod dns-2285/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63: the server is currently unable to handle the request (get pods dns-test-924ecab1-79a1-450c-805d-c2e62538ae63)
Jul  6 06:40:55.880: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2285/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63: the server is currently unable to handle the request (get pods dns-test-924ecab1-79a1-450c-805d-c2e62538ae63)
Jul  6 06:41:25.979: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2285/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63: the server is currently unable to handle the request (get pods dns-test-924ecab1-79a1-450c-805d-c2e62538ae63)
Jul  6 06:41:56.078: INFO: Unable to read 100.64.25.203_udp@PTR from pod dns-2285/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63: the server is currently unable to handle the request (get pods dns-test-924ecab1-79a1-450c-805d-c2e62538ae63)
Jul  6 06:42:25.188: FAIL: Unable to read 100.64.25.203_tcp@PTR from pod dns-2285/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2285/pods/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63/proxy/results/100.64.25.203_tcp@PTR": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f968d7d1108, 0x18, 0xc003bbf290)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc004b8b290, 0x29e9900, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b
testing.tRunner(0xc00046fe00, 0x71cf618)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0706 06:42:25.189239   12535 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul  6 06:42:25.188: Unable to read 100.64.25.203_tcp@PTR from pod dns-2285/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63: Get \"https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-2285/pods/dns-test-924ecab1-79a1-450c-805d-c2e62538ae63/proxy/results/100.64.25.203_tcp@PTR\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f968d7d1108, 0x18, 0xc003bbf290)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc004b8b290, 0x29e9900, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x780f3c8, 0xc00005e058, 0xc003bbf201, 0xc003bbf290, 0xc004b8b290, 0x67ba9a0, 0xc004b8b290)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x780f3c8, 0xc00005e058, 0x12a05f200, 0x8bb2c97000, 0xc004b8b290, 0x6cf83e0, 0x24f8401)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc000d63810, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc001693680, 0x14, 0x18, 0x6fb5f5e, 0x7, 0xc0048d4c00, 0x78a18a8, 0xc001c98b00, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00116c6e0, 0xc0048d4c00, 0xc001693680, 0x14, 0x18)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.5()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xe85\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc00046fe00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc00046fe00)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b\ntesting.tRunner(0xc00046fe00, 0x71cf618)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6b4ac20, 0xc003e98380)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6b4ac20, 0xc003e98380)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000858160, 0x143, 0x87cadfb, 0x7d, 0xd9, 0xc000aa2400, 0xa8c)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x628e540, 0x76c5570)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc000858160, 0x143, 0xc0005e35e0, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000858160, 0x143, 0xc0005e36c8, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x7059d05, 0x24, 0xc0005e3928, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f968d7d1108, 0x18, 0xc003bbf290)
... skipping 365 lines ...
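The stack trace above shows the polling machinery behind those "Unable to read ..." lines: the DNS test wraps a GET against the pod proxy subresource in a wait.ConditionFunc and drives it with wait.PollImmediate (every 5s for up to 10m, going by the 0x12a05f200 and 0x8bb2c97000 nanosecond arguments), failing the test once the overall deadline passes. A sketch of the pattern, with the timings shortened so the example terminates quickly:

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// Stand-in for "GET .../pods/<pod>/proxy/results/<record>"; assumed to
	// keep failing here, as it does in the log above.
	readResult := func() (bool, error) {
		attempts++
		fmt.Printf("attempt %d: unable to read result, retrying\n", attempts)
		return false, nil // not done, no fatal error: keep polling
	}
	// The real test polls every 5s for 10m; shortened here for illustration.
	err := wait.PollImmediate(100*time.Millisecond, time.Second, readResult)
	if errors.Is(err, wait.ErrWaitTimeout) {
		fmt.Println("timed out waiting for DNS results:", err)
	}
}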
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:474
    that expects a client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":50,"skipped":359,"failed":7,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]"]}
Jul  6 06:42:30.073: INFO: Running AfterSuite actions on all nodes
Jul  6 06:42:30.073: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:42:30.074: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:42:30.074: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:42:30.074: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:42:30.074: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 107 lines ...
• [SLOW TEST:56.633 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:133
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":28,"skipped":257,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
Jul  6 06:42:39.324: INFO: Running AfterSuite actions on all nodes
Jul  6 06:42:39.324: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:42:39.324: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:42:39.324: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:42:39.324: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:42:39.324: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 24 lines ...
Jul  6 06:39:43.949: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6187.svc.cluster.local from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: the server is currently unable to handle the request (get pods dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce)
Jul  6 06:40:14.045: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: the server is currently unable to handle the request (get pods dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce)
Jul  6 06:40:44.143: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: the server is currently unable to handle the request (get pods dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce)
Jul  6 06:41:14.240: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6187.svc.cluster.local from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: the server is currently unable to handle the request (get pods dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce)
Jul  6 06:41:44.340: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6187.svc.cluster.local from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: the server is currently unable to handle the request (get pods dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce)
Jul  6 06:42:14.438: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6187.svc.cluster.local from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: the server is currently unable to handle the request (get pods dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce)
Jul  6 06:42:43.552: FAIL: Unable to read jessie_tcp@dns-test-service-2.dns-6187.svc.cluster.local from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-6187/pods/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce/proxy/results/jessie_tcp@dns-test-service-2.dns-6187.svc.cluster.local": context deadline exceeded

Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f9da3373a68, 0x18, 0xc001a7e210)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc00001e8a0, 0x29e9900, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f
... skipping 17 lines ...
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b
testing.tRunner(0xc0003c8300, 0x71cf618)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
E0706 06:42:43.553502   12577 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jul  6 06:42:43.552: Unable to read jessie_tcp@dns-test-service-2.dns-6187.svc.cluster.local from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: Get \"https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-6187/pods/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce/proxy/results/jessie_tcp@dns-test-service-2.dns-6187.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:217, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f9da3373a68, 0x18, 0xc001a7e210)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x780f3c8, 0xc00005e058, 0xc00001e8a0, 0x29e9900, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x780f3c8, 0xc00005e058, 0xc001a7e201, 0xc001a7e210, 0xc00001e8a0, 0x67ba9a0, 0xc00001e8a0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:577 +0xe5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x780f3c8, 0xc00005e058, 0x12a05f200, 0x8bb2c97000, 0xc00001e8a0, 0x6cf83e0, 0x24f8401)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc0021424d0, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc0025a2900, 0xc, 0x10, 0x6fb5f5e, 0x7, 0xc001509000, 0x78a18a8, 0xc00359de40, 0x0, 0x0, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463 +0x13c\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:457\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc0001eab00, 0xc001509000, 0xc0025a2900, 0xc, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:520 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.8()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:322 +0xb2f\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc0003c8300)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc0003c8300)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:134 +0x2b\ntesting.tRunner(0xc0003c8300, 0x71cf618)\n\t/usr/local/go/src/testing/testing.go:1193 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1238 +0x2b3"} (
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

... skipping 5 lines ...
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6b4ac20, 0xc003176100)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x6b4ac20, 0xc003176100)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0000fc340, 0x189, 0x87cadfb, 0x7d, 0xd9, 0xc000297c00, 0xa8a)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x628e540, 0x76c5570)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0000fc340, 0x189, 0xc000b79648, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0000fc340, 0x189, 0xc000b79730, 0x1, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x7059d05, 0x24, 0xc000b79990, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:480 +0xab1
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x780f3c8, 0xc00005e058, 0x7f9da3373a68, 0x18, 0xc001a7e210)
... skipping 291 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:42:43.552: Unable to read jessie_tcp@dns-test-service-2.dns-6187.svc.cluster.local from pod dns-6187/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce: Get "https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io/api/v1/namespaces/dns-6187/pods/dns-test-4ef03f62-2a2e-43d7-bf63-2625f1feffce/proxy/results/jessie_tcp@dns-test-service-2.dns-6187.svc.cluster.local": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":15,"skipped":134,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]"]}
Jul  6 06:42:47.810: INFO: Running AfterSuite actions on all nodes
Jul  6 06:42:47.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:42:47.810: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:42:47.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:42:47.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:42:47.811: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 45 lines ...
Jul  6 06:41:34.757: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-9766
Jul  6 06:41:34.855: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-9766
Jul  6 06:41:34.952: INFO: creating *v1.StatefulSet: csi-mock-volumes-9766-778/csi-mockplugin
Jul  6 06:41:35.051: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-9766
Jul  6 06:41:35.150: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-9766"
Jul  6 06:41:35.247: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-9766 to register on node ip-172-20-36-135.eu-west-2.compute.internal
I0706 06:41:38.491799   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0706 06:41:38.590854   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9766","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0706 06:41:38.687961   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}},{"Type":{"Service":{"type":2}}}]},"Error":"","FullError":null}
I0706 06:41:38.785676   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0706 06:41:38.995991   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-9766","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0706 06:41:39.733469   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-9766","accessible_topology":{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}},"Error":"","FullError":null}
STEP: Creating pod
Jul  6 06:41:40.776: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
I0706 06:41:40.990930   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-82517b8a-05ea-4f1e-9561-b0a47e141505","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
I0706 06:41:43.964815   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-82517b8a-05ea-4f1e-9561-b0a47e141505","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}],"accessibility_requirements":{"requisite":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}],"preferred":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-82517b8a-05ea-4f1e-9561-b0a47e141505"},"accessible_topology":[{"segments":{"io.kubernetes.storage.mock/node":"some-mock-node"}}]}},"Error":"","FullError":null}
I0706 06:41:45.289955   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul  6 06:41:45.386: INFO: >>> kubeConfig: /root/.kube/config
I0706 06:41:46.079119   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-82517b8a-05ea-4f1e-9561-b0a47e141505/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-82517b8a-05ea-4f1e-9561-b0a47e141505","storage.kubernetes.io/csiProvisionerIdentity":"1625553698832-8081-csi-mock-csi-mock-volumes-9766"}},"Response":{},"Error":"","FullError":null}
I0706 06:41:46.303402   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul  6 06:41:46.400: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:41:47.074: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:41:47.752: INFO: >>> kubeConfig: /root/.kube/config
I0706 06:41:48.420528   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-82517b8a-05ea-4f1e-9561-b0a47e141505/globalmount","target_path":"/var/lib/kubelet/pods/7f1b543d-345a-4a0c-ad00-5394a036b005/volumes/kubernetes.io~csi/pvc-82517b8a-05ea-4f1e-9561-b0a47e141505/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-82517b8a-05ea-4f1e-9561-b0a47e141505","storage.kubernetes.io/csiProvisionerIdentity":"1625553698832-8081-csi-mock-csi-mock-volumes-9766"}},"Response":{},"Error":"","FullError":null}
Jul  6 06:41:51.179: INFO: Deleting pod "pvc-volume-tester-q4qxk" in namespace "csi-mock-volumes-9766"
Jul  6 06:41:51.277: INFO: Wait up to 5m0s for pod "pvc-volume-tester-q4qxk" to be fully deleted
Jul  6 06:41:52.948: INFO: >>> kubeConfig: /root/.kube/config
I0706 06:41:53.655445   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/7f1b543d-345a-4a0c-ad00-5394a036b005/volumes/kubernetes.io~csi/pvc-82517b8a-05ea-4f1e-9561-b0a47e141505/mount"},"Response":{},"Error":"","FullError":null}
I0706 06:41:53.762515   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0706 06:41:53.860366   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-82517b8a-05ea-4f1e-9561-b0a47e141505/globalmount"},"Response":{},"Error":"","FullError":null}
I0706 06:42:05.588032   12381 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jul  6 06:42:06.573: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pbj4h", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9766", SelfLink:"", UID:"82517b8a-05ea-4f1e-9561-b0a47e141505", ResourceVersion:"44725", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150500, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002f95158), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002f95170), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc00445cff0), VolumeMode:(*v1.PersistentVolumeMode)(0xc00445d000), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  6 06:42:06.573: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pbj4h", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9766", SelfLink:"", UID:"82517b8a-05ea-4f1e-9561-b0a47e141505", ResourceVersion:"44729", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150500, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.kubernetes.io/selected-node":"ip-172-20-36-135.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003997740), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003997758), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003997770), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003997788), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0044a6930), VolumeMode:(*v1.PersistentVolumeMode)(0xc0044a6940), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  6 06:42:06.573: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pbj4h", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9766", SelfLink:"", UID:"82517b8a-05ea-4f1e-9561-b0a47e141505", ResourceVersion:"44730", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150500, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9766", "volume.kubernetes.io/selected-node":"ip-172-20-36-135.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003924168), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003924180), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003924198), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039241b0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0039241c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0039241e0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0045ffb90), VolumeMode:(*v1.PersistentVolumeMode)(0xc0045ffba0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  6 06:42:06.573: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pbj4h", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9766", SelfLink:"", UID:"82517b8a-05ea-4f1e-9561-b0a47e141505", ResourceVersion:"44733", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150500, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9766"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004856a98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004856ab0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004856ac8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004856ae0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004856af8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004856b10), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000cc49e0), VolumeMode:(*v1.PersistentVolumeMode)(0xc000cc49f0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  6 06:42:06.573: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-pbj4h", GenerateName:"pvc-", Namespace:"csi-mock-volumes-9766", SelfLink:"", UID:"82517b8a-05ea-4f1e-9561-b0a47e141505", ResourceVersion:"44793", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150500, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-9766", "volume.kubernetes.io/selected-node":"ip-172-20-36-135.eu-west-2.compute.internal"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004856b40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004856b58), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004856b70), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004856b88), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-scheduler", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc004856ba0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc004856bb8), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc000cc4a20), VolumeMode:(*v1.PersistentVolumeMode)(0xc000cc4a30), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
... skipping 51 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    exhausted, late binding, with topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology","total":-1,"completed":20,"skipped":134,"failed":4,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","[sig-network] DNS should support configurable pod resolv.conf","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]"]}
Jul  6 06:43:08.213: INFO: Running AfterSuite actions on all nodes
Jul  6 06:43:08.213: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:43:08.213: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:43:08.213: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:43:08.213: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:43:08.213: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 73 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","total":-1,"completed":28,"skipped":223,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols"]}
Jul  6 06:43:16.185: INFO: Running AfterSuite actions on all nodes
Jul  6 06:43:16.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:43:16.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:43:16.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:43:16.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:43:16.185: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 13 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replication controller my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e
Jul  6 06:40:20.247: INFO: Pod name my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e: Found 1 pods out of 1
Jul  6 06:40:20.247: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e" are running
Jul  6 06:40:22.441: INFO: Pod "my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-06 06:40:20 +0000 UTC Reason: Message:}])
Jul  6 06:40:22.441: INFO: Trying to dial the pod
Jul  6 06:40:57.732: INFO: Controller my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e: Failed to GET from replica 1 [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f]: the server is currently unable to handle the request (get pods my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761150420, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761150420, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761150420, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761150420, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.59.118", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc004702a08), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003fcc100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc0043e499d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  6 06:41:32.731: INFO: Controller my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e: Failed to GET from replica 1 [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f]: the server is currently unable to handle the request (get pods my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f)
pod status: (identical v1.PodStatus to the one dumped above: still Pending, ContainersNotReady, no PodIP)
Jul  6 06:42:07.735: INFO: Controller my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e: Failed to GET from replica 1 [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f]: the server is currently unable to handle the request (get pods my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f)
Jul  6 06:42:42.731: INFO: Controller my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e: Failed to GET from replica 1 [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f]: the server is currently unable to handle the request (get pods my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f)
Jul  6 06:43:13.021: INFO: Controller my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e: Failed to GET from replica 1 [my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f]: the server is currently unable to handle the request (get pods my-hostname-basic-16208db3-05be-429c-9743-c04c04b1a44e-fzv7f)
Jul  6 06:43:13.022: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func7.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65 +0x57
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00097ef00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 226 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:43:13.022: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65
------------------------------
{"msg":"FAILED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":34,"skipped":319,"failed":4,"failures":["[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]"]}
Jul  6 06:43:16.991: INFO: Running AfterSuite actions on all nodes
Jul  6 06:43:16.991: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:43:16.991: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:43:16.991: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:43:16.991: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:43:16.991: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 16 lines ...
Jul  6 06:41:06.859: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-8284pdckg
STEP: creating a claim
Jul  6 06:41:06.957: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-qgf9
STEP: Creating a pod to test subpath
Jul  6 06:41:07.253: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-qgf9" in namespace "provisioning-8284" to be "Succeeded or Failed"
Jul  6 06:41:07.350: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 97.338336ms
Jul  6 06:41:09.449: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195792889s
Jul  6 06:41:11.547: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294677734s
Jul  6 06:41:13.650: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397068071s
Jul  6 06:41:15.748: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.495097934s
Jul  6 06:41:17.846: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.593016419s
... skipping 20 lines ...
Jul  6 06:42:01.921: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 54.668319048s
Jul  6 06:42:04.020: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 56.766954203s
Jul  6 06:42:06.117: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 58.864647706s
Jul  6 06:42:08.218: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.964961259s
Jul  6 06:42:10.315: INFO: Pod "pod-subpath-test-dynamicpv-qgf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m3.06260977s
STEP: Saw pod success
Jul  6 06:42:10.315: INFO: Pod "pod-subpath-test-dynamicpv-qgf9" satisfied condition "Succeeded or Failed"
Jul  6 06:42:10.413: INFO: Trying to get logs from node ip-172-20-32-57.eu-west-2.compute.internal pod pod-subpath-test-dynamicpv-qgf9 container test-container-subpath-dynamicpv-qgf9: <nil>
STEP: delete the pod
Jul  6 06:42:10.622: INFO: Waiting for pod pod-subpath-test-dynamicpv-qgf9 to disappear
Jul  6 06:42:10.719: INFO: Pod pod-subpath-test-dynamicpv-qgf9 no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-qgf9
Jul  6 06:42:10.719: INFO: Deleting pod "pod-subpath-test-dynamicpv-qgf9" in namespace "provisioning-8284"
... skipping 30 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":33,"skipped":330,"failed":6,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted."]}
Jul  6 06:43:17.892: INFO: Running AfterSuite actions on all nodes
Jul  6 06:43:17.892: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:43:17.892: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:43:17.892: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:43:17.892: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:43:17.892: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 41 lines ...
Jul  6 06:41:39.649: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2009 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.66.153.33:80 2>&1 || true; echo; done'
Jul  6 06:43:20.825: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - 
http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ 
wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.66.153.33:80\n+ true\n+ echo\n"
Jul  6 06:43:20.825: INFO: stdout: "wget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed 
out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nwget: download timed out\n\nservice-headless-toggled-kn6dc\nwget: download timed out\n\nwget: download timed out\n\n"
Jul  6 06:43:20.826: INFO: Unable to reach the following endpoints of service 100.66.153.33: map[service-headless-toggled-4vcbs:{} service-headless-toggled-zdvgk:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-2009
STEP: Deleting pod verify-service-up-exec-pod-nlk98 in namespace services-2009
Jul  6 06:43:26.244: FAIL: Unexpected error:
    <*errors.errorString | 0xc0030d8060>: {
        s: "service verification failed for: 100.66.153.33\nexpected [service-headless-toggled-4vcbs service-headless-toggled-kn6dc service-headless-toggled-zdvgk]\nreceived [service-headless-toggled-kn6dc wget: download timed out]",
    }
    service verification failed for: 100.66.153.33
    expected [service-headless-toggled-4vcbs service-headless-toggled-kn6dc service-headless-toggled-zdvgk]
    received [service-headless-toggled-kn6dc wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.29()
... skipping 259 lines ...
• Failure [337.062 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1934

  Jul  6 06:43:26.244: Unexpected error:
      <*errors.errorString | 0xc0030d8060>: {
          s: "service verification failed for: 100.66.153.33\nexpected [service-headless-toggled-4vcbs service-headless-toggled-kn6dc service-headless-toggled-zdvgk]\nreceived [service-headless-toggled-kn6dc wget: download timed out]",
      }
      service verification failed for: 100.66.153.33
      expected [service-headless-toggled-4vcbs service-headless-toggled-kn6dc service-headless-toggled-zdvgk]
      received [service-headless-toggled-kn6dc wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1959
------------------------------
{"msg":"FAILED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":22,"skipped":211,"failed":3,"failures":["[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should implement service.kubernetes.io/headless"]}
Jul  6 06:43:30.387: INFO: Running AfterSuite actions on all nodes
Jul  6 06:43:30.387: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:43:30.387: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:43:30.387: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:43:30.387: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:43:30.387: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 45 lines ...
Jul  6 06:42:06.098: INFO: creating *v1.ClusterRoleBinding: csi-controller-snapshotter-role-csi-mock-volumes-7220
Jul  6 06:42:06.197: INFO: creating *v1.StorageClass: csi-mock-sc-csi-mock-volumes-7220
Jul  6 06:42:06.294: INFO: creating *v1.StatefulSet: csi-mock-volumes-7220-8155/csi-mockplugin
Jul  6 06:42:06.392: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-7220
Jul  6 06:42:06.490: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-7220"
Jul  6 06:42:06.586: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-7220 to register on node ip-172-20-36-135.eu-west-2.compute.internal
I0706 06:42:09.874990   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":"","FullError":null}
I0706 06:42:09.974107   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7220","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0706 06:42:10.073655   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":"","FullError":null}
I0706 06:42:10.173586   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":10}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":12}}},{"Type":{"Rpc":{"type":11}}},{"Type":{"Rpc":{"type":9}}}]},"Error":"","FullError":null}
I0706 06:42:10.381803   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7220","vendor_version":"0.3.0","manifest":{"url":"https://k8s.io/kubernetes/test/e2e/storage/drivers/csi-test/mock"}},"Error":"","FullError":null}
I0706 06:42:10.893781   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7220"},"Error":"","FullError":null}
STEP: Creating pod
Jul  6 06:42:16.668: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  6 06:42:16.767: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-f6mbm] to have phase Bound
I0706 06:42:16.815238   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":null,"Error":"rpc error: code = ResourceExhausted desc = fake error","FullError":{"code":8,"message":"fake error"}}
Jul  6 06:42:16.863: INFO: PersistentVolumeClaim pvc-f6mbm found but phase is Pending instead of Bound.
I0706 06:42:16.940139   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf"}}},"Error":"","FullError":null}
Jul  6 06:42:18.966: INFO: PersistentVolumeClaim pvc-f6mbm found and phase=Bound (2.198137715s)
I0706 06:42:20.135898   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul  6 06:42:20.233: INFO: >>> kubeConfig: /root/.kube/config
I0706 06:42:20.894407   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf","storage.kubernetes.io/csiProvisionerIdentity":"1625553730219-8081-csi-mock-csi-mock-volumes-7220"}},"Response":{},"Error":"","FullError":null}
I0706 06:42:20.993803   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
Jul  6 06:42:21.094: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:42:21.799: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:42:22.469: INFO: >>> kubeConfig: /root/.kube/config
I0706 06:42:23.136779   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf/globalmount","target_path":"/var/lib/kubelet/pods/93ed410b-ce89-4a51-b7c4-3a94df31ed4f/volumes/kubernetes.io~csi/pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf","storage.kubernetes.io/csiProvisionerIdentity":"1625553730219-8081-csi-mock-csi-mock-volumes-7220"}},"Response":{},"Error":"","FullError":null}
Jul  6 06:42:27.450: INFO: Deleting pod "pvc-volume-tester-k2ppd" in namespace "csi-mock-volumes-7220"
Jul  6 06:42:27.548: INFO: Wait up to 5m0s for pod "pvc-volume-tester-k2ppd" to be fully deleted
Jul  6 06:42:28.824: INFO: >>> kubeConfig: /root/.kube/config
I0706 06:42:29.502464   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/93ed410b-ce89-4a51-b7c4-3a94df31ed4f/volumes/kubernetes.io~csi/pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf/mount"},"Response":{},"Error":"","FullError":null}
I0706 06:42:29.626315   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":4}}}]},"Error":"","FullError":null}
I0706 06:42:29.724197   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf/globalmount"},"Response":{},"Error":"","FullError":null}
I0706 06:42:33.852939   12567 csi.go:429] gRPCCall: {"Method":"/csi.v1.Controller/DeleteVolume","Request":{"volume_id":"4"},"Response":{},"Error":"","FullError":null}
STEP: Checking PVC events
Jul  6 06:42:34.839: INFO: PVC event ADDED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-f6mbm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7220", SelfLink:"", UID:"9c80299b-6360-4dc4-98b0-dd6b09ca5bbf", ResourceVersion:"46188", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150536, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003f7ca98), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003f7cab0), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc0027cdbf0), VolumeMode:(*v1.PersistentVolumeMode)(0xc0027cdc00), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  6 06:42:34.839: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-f6mbm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7220", SelfLink:"", UID:"9c80299b-6360-4dc4-98b0-dd6b09ca5bbf", ResourceVersion:"46189", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150536, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7220"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002754030), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002754048), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002754060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002754090), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"", StorageClassName:(*string)(0xc002884020), VolumeMode:(*v1.PersistentVolumeMode)(0xc002884030), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  6 06:42:34.839: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-f6mbm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7220", SelfLink:"", UID:"9c80299b-6360-4dc4-98b0-dd6b09ca5bbf", ResourceVersion:"46194", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150536, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7220"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00390c3f0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00390c408), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00390c420), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00390c438), Subresource:""}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf", StorageClassName:(*string)(0xc002b29310), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b29320), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Pending", AccessModes:[]v1.PersistentVolumeAccessMode(nil), Capacity:v1.ResourceList(nil), Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  6 06:42:34.839: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-f6mbm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7220", SelfLink:"", UID:"9c80299b-6360-4dc4-98b0-dd6b09ca5bbf", ResourceVersion:"46196", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150536, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7220"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00390c468), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00390c480), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00390c498), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00390c4b0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00390c4c8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00390c4e0), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf", StorageClassName:(*string)(0xc002b29350), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b29360), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
Jul  6 06:42:34.840: INFO: PVC event MODIFIED: &v1.PersistentVolumeClaim{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pvc-f6mbm", GenerateName:"pvc-", Namespace:"csi-mock-volumes-7220", SelfLink:"", UID:"9c80299b-6360-4dc4-98b0-dd6b09ca5bbf", ResourceVersion:"46819", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761150536, loc:(*time.Location)(0x9f895a0)}}, DeletionTimestamp:(*v1.Time)(0xc00390c570), DeletionGracePeriodSeconds:(*int64)(0xc0018b84f8), Labels:map[string]string(nil), Annotations:map[string]string{"pv.kubernetes.io/bind-completed":"yes", "pv.kubernetes.io/bound-by-controller":"yes", "volume.beta.kubernetes.io/storage-provisioner":"csi-mock-csi-mock-volumes-7220"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string{"kubernetes.io/pvc-protection"}, ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00390c588), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00390c5a0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00390c5b8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00390c5d0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00390c5e8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00390c600), Subresource:"status"}}}, Spec:v1.PersistentVolumeClaimSpec{AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Selector:(*v1.LabelSelector)(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}}, VolumeName:"pvc-9c80299b-6360-4dc4-98b0-dd6b09ca5bbf", StorageClassName:(*string)(0xc002b293a0), VolumeMode:(*v1.PersistentVolumeMode)(0xc002b293b0), DataSource:(*v1.TypedLocalObjectReference)(nil)}, Status:v1.PersistentVolumeClaimStatus{Phase:"Bound", AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}}, Conditions:[]v1.PersistentVolumeClaimCondition(nil)}}
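
The PVC event dump above walks the claim's lifecycle: ADDED, then MODIFIED as the provisioner annotation lands, the bind completes, the phase turns Bound, and finally a deletion timestamp appears. A hedged client-go sketch of watching that stream, with hypothetical package and function names:

package pvcwatch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printPVCEvents streams ADDED/MODIFIED/DELETED events for one claim, the
// same lifecycle the "Checking PVC events" step above walks through
// (provisioner annotation, bind-completed, phase Bound, deletion timestamp).
func printPVCEvents(cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().PersistentVolumeClaims(ns).Watch(context.TODO(),
		metav1.ListOptions{FieldSelector: "metadata.name=" + name})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pvc, ok := ev.Object.(*corev1.PersistentVolumeClaim)
		if !ok {
			continue
		}
		fmt.Printf("PVC event %s: phase=%s volume=%q annotations=%v\n",
			ev.Type, pvc.Status.Phase, pvc.Spec.VolumeName, pvc.Annotations)
	}
	return nil
}
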
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    exhausted, immediate binding
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity exhausted, immediate binding","total":-1,"completed":19,"skipped":128,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access"]}
Jul  6 06:43:36.336: INFO: Running AfterSuite actions on all nodes
Jul  6 06:43:36.336: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:43:36.336: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:43:36.336: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:43:36.336: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:43:36.336: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 128 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":55,"skipped":451,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}
Jul  6 06:43:45.083: INFO: Running AfterSuite actions on all nodes
Jul  6 06:43:45.083: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:43:45.083: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:43:45.083: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:43:45.083: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:43:45.083: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 49 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":9,"skipped":79,"failed":0}
Jul  6 06:43:55.452: INFO: Running AfterSuite actions on all nodes
Jul  6 06:43:55.452: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:43:55.452: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:43:55.452: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:43:55.452: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:43:55.452: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:673
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:688
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":33,"skipped":309,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access","[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]"]}
Jul  6 06:44:22.348: INFO: Running AfterSuite actions on all nodes
Jul  6 06:44:22.348: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:44:22.348: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:44:22.348: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:44:22.348: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:44:22.348: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision a volume and schedule a pod with AllowedTopologies
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:164
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":-1,"completed":35,"skipped":277,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-network] Services should be able to up and down services"]}
Jul  6 06:44:24.279: INFO: Running AfterSuite actions on all nodes
Jul  6 06:44:24.279: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:44:24.279: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:44:24.279: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:44:24.279: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:44:24.279: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 278 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  33s   default-scheduler  Successfully assigned pod-network-test-9366/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     32s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    32s   kubelet            Created container webserver
  Normal  Started    32s   kubelet            Started container webserver

Jul  6 06:26:01.976: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.4.146&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Jul  6 06:26:01.976: INFO: ...failed...will try again in next pass
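The probe above is the e2e "dial" check: the framework execs into test-container-pod and curls the netserver's /dial endpoint on port 9080, which fans out a UDP hostname request to the target pod and reports the responders as a map; "retrieved map[]" means the target never answered. The same probe can be reproduced by hand with the exact command from the log (a minimal sketch, assuming the test namespace and pods are still up):

    # exec the same dial probe from the log inside the test pod
    kubectl exec -n pod-network-test-9366 test-container-pod -c webserver -- \
      /bin/sh -c "curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.4.146&port=8081&tries=1'"

A healthy path prints a JSON body naming the responder (here netserver-0); an empty response list reproduces the failure recorded above.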
Jul  6 06:26:01.976: INFO: Breadth first check of 100.96.2.105 on host 172.20.36.135...
Jul  6 06:26:02.074: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.2.105&port=8081&tries=1'] Namespace:pod-network-test-9366 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 06:26:02.074: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:26:02.742: INFO: Waiting for responses: map[]
Jul  6 06:26:02.742: INFO: reached 100.96.2.105 after 0/1 tries
Jul  6 06:26:02.742: INFO: Breadth first check of 100.96.1.179 on host 172.20.56.54...
... skipping 245 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  44s   default-scheduler  Successfully assigned pod-network-test-9366/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     43s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    43s   kubelet            Created container webserver
  Normal  Started    43s   kubelet            Started container webserver

Jul  6 06:26:12.763: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.1.179&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  6 06:26:12.763: INFO: ...failed...will try again in next pass
Jul  6 06:26:12.763: INFO: Breadth first check of 100.96.3.146 on host 172.20.59.118...
Jul  6 06:26:12.860: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.3.146&port=8081&tries=1'] Namespace:pod-network-test-9366 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 06:26:12.860: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:26:18.545: INFO: Waiting for responses: map[netserver-3:{}]
Jul  6 06:26:20.546: INFO: 
Output of kubectl describe pod pod-network-test-9366/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  54s   default-scheduler  Successfully assigned pod-network-test-9366/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     53s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    53s   kubelet            Created container webserver
  Normal  Started    53s   kubelet            Started container webserver

Jul  6 06:26:22.814: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.3.146&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Jul  6 06:26:22.814: INFO: ...failed...will try again in next pass
Jul  6 06:26:22.814: INFO: Going to retry 3 out of 4 pods....
Jul  6 06:26:22.814: INFO: Doublechecking 1 pods in host 172.20.32.57 which weren't seen the first time.
Jul  6 06:26:22.814: INFO: Now attempting to probe pod [[[ 100.96.4.146 ]]]
Jul  6 06:26:22.911: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.4.146&port=8081&tries=1'] Namespace:pod-network-test-9366 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 06:26:22.911: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:26:28.586: INFO: Waiting for responses: map[netserver-0:{}]
... skipping 377 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m55s  default-scheduler  Successfully assigned pod-network-test-9366/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     6m54s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    6m54s  kubelet            Created container webserver
  Normal  Started    6m54s  kubelet            Started container webserver

Jul  6 06:32:23.750: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.4.146&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Jul  6 06:32:23.750: INFO: ... Done probing pod [[[ 100.96.4.146 ]]]
Jul  6 06:32:23.750: INFO: succeeded at polling 3 out of 4 connections
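Note the escalation between passes: the breadth-first sweep above used tries=1 per target, while this doublecheck pass retries each unreachable pod up to 46 times (roughly six minutes per target, 06:26 to 06:32 here) before recording it as a collected error. A hand-rolled equivalent of that retry loop, as a sketch with the probe command taken verbatim from the log (the 8-second sleep is an assumption; the framework's own pacing differs):

    # retry the same dial probe up to 46 times, as the doublecheck pass does
    for i in $(seq 1 46); do
      kubectl exec -n pod-network-test-9366 test-container-pod -c webserver -- \
          /bin/sh -c "curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.4.146&port=8081&tries=1'" \
        | grep -q netserver-0 && { echo reached; break; }
      sleep 8
    done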
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned pod-network-test-9366/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     12m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    12m   kubelet            Created container webserver
  Normal  Started    12m   kubelet            Started container webserver

Jul  6 06:38:23.593: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.1.179&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  6 06:38:23.593: INFO: ... Done probing pod [[[ 100.96.1.179 ]]]
Jul  6 06:38:23.593: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  18m   default-scheduler  Successfully assigned pod-network-test-9366/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     18m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    18m   kubelet            Created container webserver
  Normal  Started    18m   kubelet            Started container webserver

Jul  6 06:44:23.467: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.3.146&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}])
Jul  6 06:44:23.467: INFO: ... Done probing pod [[[ 100.96.3.146 ]]]
Jul  6 06:44:23.467: INFO: succeeded at polling 1 out of 4 connections
Jul  6 06:44:23.467: INFO: pod polling failure summary:
Jul  6 06:44:23.467: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.4.146&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]
Jul  6 06:44:23.467: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.1.179&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Jul  6 06:44:23.467: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.2.108:9080/dial?request=hostname&protocol=udp&host=100.96.3.146&port=8081&tries=1'
retrieved map[]
expected map[netserver-3:{}]
Jul  6 06:44:23.467: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000bae600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 200 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  6 06:44:23.467: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":58,"failed":2,"failures":["[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]"]}
Jul  6 06:44:27.377: INFO: Running AfterSuite actions on all nodes
Jul  6 06:44:27.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:44:27.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:44:27.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:44:27.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:44:27.377: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 84 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":33,"skipped":307,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod  [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
Jul  6 06:45:11.426: INFO: Running AfterSuite actions on all nodes
Jul  6 06:45:11.426: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:45:11.426: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:45:11.426: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:45:11.426: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:45:11.426: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 53 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":31,"skipped":245,"failed":2,"failures":["[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access "]}
Jul  6 06:45:14.857: INFO: Running AfterSuite actions on all nodes
Jul  6 06:45:14.857: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:45:14.857: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:45:14.857: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:45:14.857: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:45:14.857: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 32 lines ...
Jul  6 06:41:12.161: INFO: stderr: ""
Jul  6 06:41:12.161: INFO: stdout: "true"
Jul  6 06:41:12.161: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:41:12.527: INFO: stderr: ""
Jul  6 06:41:12.527: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:41:12.527: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:41:42.624: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
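The two Go templates in this loop do the state and image checks respectively: the first prints "true" only when a containerStatuses entry named update-demo carries a state.running key (the log shows kubectl's template printer accepting the exists helper here), and the second extracts .image from the matching spec container. Either can be run standalone against any pod; a sketch reusing the namespace and pod name from the log:

    # print "true" iff the update-demo container is in state running
    kubectl --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template \
      --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

What fails each iteration is not these template reads (both return cleanly above) but the validator's follow-up pod read, which keeps getting "the server is currently unable to handle the request".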
Jul  6 06:41:47.625: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jul  6 06:41:48.076: INFO: stderr: ""
Jul  6 06:41:48.076: INFO: stdout: "update-demo-nautilus-p2269 update-demo-nautilus-pld5j "
Jul  6 06:41:48.076: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jul  6 06:41:48.425: INFO: stderr: ""
Jul  6 06:41:48.425: INFO: stdout: "true"
Jul  6 06:41:48.425: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:41:48.777: INFO: stderr: ""
Jul  6 06:41:48.777: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:41:48.777: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:42:18.875: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
Jul  6 06:42:23.877: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jul  6 06:42:24.333: INFO: stderr: ""
Jul  6 06:42:24.333: INFO: stdout: "update-demo-nautilus-p2269 update-demo-nautilus-pld5j "
Jul  6 06:42:24.333: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jul  6 06:42:24.685: INFO: stderr: ""
Jul  6 06:42:24.685: INFO: stdout: "true"
Jul  6 06:42:24.685: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:42:25.038: INFO: stderr: ""
Jul  6 06:42:25.038: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:42:25.038: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:42:55.135: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
Jul  6 06:43:00.138: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jul  6 06:43:00.591: INFO: stderr: ""
Jul  6 06:43:00.591: INFO: stdout: "update-demo-nautilus-p2269 update-demo-nautilus-pld5j "
Jul  6 06:43:00.591: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jul  6 06:43:00.945: INFO: stderr: ""
Jul  6 06:43:00.945: INFO: stdout: "true"
Jul  6 06:43:00.945: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:43:01.308: INFO: stderr: ""
Jul  6 06:43:01.308: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:43:01.308: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:43:31.405: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
Jul  6 06:43:36.406: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jul  6 06:43:36.878: INFO: stderr: ""
Jul  6 06:43:36.878: INFO: stdout: "update-demo-nautilus-p2269 update-demo-nautilus-pld5j "
Jul  6 06:43:36.878: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jul  6 06:43:37.234: INFO: stderr: ""
Jul  6 06:43:37.234: INFO: stdout: "true"
Jul  6 06:43:37.234: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:43:37.598: INFO: stderr: ""
Jul  6 06:43:37.598: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:43:37.598: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:44:07.695: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
Jul  6 06:44:12.696: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jul  6 06:44:13.145: INFO: stderr: ""
Jul  6 06:44:13.145: INFO: stdout: "update-demo-nautilus-p2269 update-demo-nautilus-pld5j "
Jul  6 06:44:13.145: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jul  6 06:44:13.534: INFO: stderr: ""
Jul  6 06:44:13.534: INFO: stdout: "true"
Jul  6 06:44:13.535: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:44:13.916: INFO: stderr: ""
Jul  6 06:44:13.916: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:44:13.916: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:44:44.014: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
Jul  6 06:44:49.016: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jul  6 06:44:49.495: INFO: stderr: ""
Jul  6 06:44:49.495: INFO: stdout: "update-demo-nautilus-p2269 update-demo-nautilus-pld5j "
Jul  6 06:44:49.495: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jul  6 06:44:49.865: INFO: stderr: ""
Jul  6 06:44:49.865: INFO: stdout: "true"
Jul  6 06:44:49.865: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:44:50.227: INFO: stderr: ""
Jul  6 06:44:50.227: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:44:50.227: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:45:20.324: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
Jul  6 06:45:25.324: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jul  6 06:45:25.807: INFO: stderr: ""
Jul  6 06:45:25.807: INFO: stdout: "update-demo-nautilus-p2269 update-demo-nautilus-pld5j "
Jul  6 06:45:25.807: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jul  6 06:45:26.157: INFO: stderr: ""
Jul  6 06:45:26.157: INFO: stdout: "true"
Jul  6 06:45:26.157: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:45:26.540: INFO: stderr: ""
Jul  6 06:45:26.540: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:45:26.540: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:45:56.637: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
Jul  6 06:46:01.638: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jul  6 06:46:02.099: INFO: stderr: ""
Jul  6 06:46:02.099: INFO: stdout: "update-demo-nautilus-p2269 update-demo-nautilus-pld5j "
Jul  6 06:46:02.099: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jul  6 06:46:02.457: INFO: stderr: ""
Jul  6 06:46:02.457: INFO: stdout: "true"
Jul  6 06:46:02.457: INFO: Running '/tmp/kubectl1830283086/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-2327 get pods update-demo-nautilus-p2269 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jul  6 06:46:02.816: INFO: stderr: ""
Jul  6 06:46:02.816: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jul  6 06:46:02.816: INFO: validating pod update-demo-nautilus-p2269
Jul  6 06:46:32.913: INFO: update-demo-nautilus-p2269 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-p2269)
Jul  6 06:46:37.913: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 +0x2b0
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000229b00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 187 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  6 06:46:37.913: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":26,"skipped":305,"failed":2,"failures":["[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
Jul  6 06:46:43.283: INFO: Running AfterSuite actions on all nodes
Jul  6 06:46:43.283: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:46:43.283: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:46:43.283: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:46:43.283: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:46:43.283: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  6 06:46:43.283: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  6 06:46:43.283: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","total":-1,"completed":20,"skipped":163,"failed":2,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","[sig-network] Services should allow pods to hairpin back to themselves through services"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  6 06:29:27.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
Jul  6 06:31:00.899: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:31:30.997: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:32:01.095: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:32:31.194: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:33:01.296: INFO: Unable to read jessie_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:33:31.394: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:33:31.394: INFO: Lookups using dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:34:06.494: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:34:36.591: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:35:06.688: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:35:36.785: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:36:06.882: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:36:36.980: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:37:07.077: INFO: Unable to read jessie_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:37:37.180: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:37:37.180: INFO: Lookups using dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:38:11.493: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:38:41.591: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:39:11.688: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:39:41.786: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:40:11.883: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:40:41.981: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:41:12.078: INFO: Unable to read jessie_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:41:42.175: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:41:42.175: INFO: Lookups using dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:42:16.501: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:42:46.598: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:43:16.695: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:43:46.794: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:44:16.891: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:44:46.989: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:45:17.088: INFO: Unable to read jessie_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:45:47.186: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:45:47.186: INFO: Lookups using dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:46:17.284: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:46:47.382: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:47:17.480: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:47:47.579: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:48:17.677: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:48:47.776: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:49:17.875: INFO: Unable to read jessie_udp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:49:47.973: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b: the server is currently unable to handle the request (get pods dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b)
Jul  6 06:49:47.973: INFO: Lookups using dns-7285/dns-test-2db0dfec-5589-4629-8cd8-e5be3ca6779b failed for: [wheezy_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-7285.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:49:47.974: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002c0240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 176 lines ...
• Failure [1224.379 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:49:47.974: Unexpected error:
      <*errors.errorString | 0xc0002c0240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":163,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","[sig-network] Services should allow pods to hairpin back to themselves through services","[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]"]}
Jul  6 06:49:52.106: INFO: Running AfterSuite actions on all nodes
Jul  6 06:49:52.106: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:49:52.106: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:49:52.106: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:49:52.106: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:49:52.106: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 278 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  33s   default-scheduler  Successfully assigned pod-network-test-1807/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     33s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    33s   kubelet            Created container webserver
  Normal  Started    33s   kubelet            Started container webserver

Jul  6 06:34:44.490: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.4.214&port=8083&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Jul  6 06:34:44.490: INFO: ...failed...will try again in next pass
Jul  6 06:34:44.490: INFO: Breadth first check of 100.96.2.172 on host 172.20.36.135...
Jul  6 06:34:44.586: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.2.172&port=8083&tries=1'] Namespace:pod-network-test-1807 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 06:34:44.586: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:34:50.286: INFO: Waiting for responses: map[netserver-1:{}]
Jul  6 06:34:52.288: INFO: 
Output of kubectl describe pod pod-network-test-1807/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  43s   default-scheduler  Successfully assigned pod-network-test-1807/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     43s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    43s   kubelet            Created container webserver
  Normal  Started    43s   kubelet            Started container webserver

Jul  6 06:34:54.586: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.2.172&port=8083&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Jul  6 06:34:54.586: INFO: ...failed...will try again in next pass
Jul  6 06:34:54.586: INFO: Breadth first check of 100.96.1.28 on host 172.20.56.54...
Jul  6 06:34:54.683: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.1.28&port=8083&tries=1'] Namespace:pod-network-test-1807 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 06:34:54.683: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:35:00.409: INFO: Waiting for responses: map[netserver-2:{}]
Jul  6 06:35:02.410: INFO: 
Output of kubectl describe pod pod-network-test-1807/netserver-0:
... skipping 240 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  53s   default-scheduler  Successfully assigned pod-network-test-1807/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     53s   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    53s   kubelet            Created container webserver
  Normal  Started    53s   kubelet            Started container webserver

Jul  6 06:35:04.657: INFO: encountered error during dial (did not find expected responses... 
Tries 1
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.1.28&port=8083&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  6 06:35:04.657: INFO: ...failed...will try again in next pass
Jul  6 06:35:04.657: INFO: Breadth first check of 100.96.3.208 on host 172.20.59.118...
Jul  6 06:35:04.760: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.3.208&port=8083&tries=1'] Namespace:pod-network-test-1807 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  6 06:35:04.760: INFO: >>> kubeConfig: /root/.kube/config
Jul  6 06:35:05.483: INFO: Waiting for responses: map[]
Jul  6 06:35:05.483: INFO: reached 100.96.3.208 after 0/1 tries
Jul  6 06:35:05.483: INFO: Going to retry 3 out of 4 pods....
... skipping 382 lines ...
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m54s  default-scheduler  Successfully assigned pod-network-test-1807/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     6m54s  kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    6m54s  kubelet            Created container webserver
  Normal  Started    6m54s  kubelet            Started container webserver

Jul  6 06:41:05.784: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.4.214&port=8083&tries=1'
retrieved map[]
expected map[netserver-0:{}])
Jul  6 06:41:05.784: INFO: ... Done probing pod [[[ 100.96.4.214 ]]]
Jul  6 06:41:05.784: INFO: succeeded at polling 3 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned pod-network-test-1807/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     12m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    12m   kubelet            Created container webserver
  Normal  Started    12m   kubelet            Started container webserver

Jul  6 06:47:06.052: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.2.172&port=8083&tries=1'
retrieved map[]
expected map[netserver-1:{}])
Jul  6 06:47:06.052: INFO: ... Done probing pod [[[ 100.96.2.172 ]]]
Jul  6 06:47:06.052: INFO: succeeded at polling 2 out of 4 connections
... skipping 382 lines ...
  ----    ------     ----  ----               -------
  Normal  Scheduled  18m   default-scheduler  Successfully assigned pod-network-test-1807/netserver-3 to ip-172-20-59-118.eu-west-2.compute.internal
  Normal  Pulled     18m   kubelet            Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
  Normal  Created    18m   kubelet            Created container webserver
  Normal  Started    18m   kubelet            Started container webserver

Jul  6 06:53:07.005: INFO: encountered error during dial (did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.1.28&port=8083&tries=1'
retrieved map[]
expected map[netserver-2:{}])
Jul  6 06:53:07.005: INFO: ... Done probing pod [[[ 100.96.1.28 ]]]
Jul  6 06:53:07.005: INFO: succeeded at polling 1 out of 4 connections
Jul  6 06:53:07.005: INFO: pod polling failure summary:
Jul  6 06:53:07.005: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.4.214&port=8083&tries=1'
retrieved map[]
expected map[netserver-0:{}]
Jul  6 06:53:07.005: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.2.172&port=8083&tries=1'
retrieved map[]
expected map[netserver-1:{}]
Jul  6 06:53:07.005: INFO: Collected error: did not find expected responses... 
Tries 46
Command curl -g -q -s 'http://100.96.3.212:9080/dial?request=hostname&protocol=http&host=100.96.1.28&port=8083&tries=1'
retrieved map[]
expected map[netserver-2:{}]
Jul  6 06:53:07.006: FAIL: failed,  3 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000229380)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 178 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Jul  6 06:53:07.006: failed,  3 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":160,"failed":3,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
Jul  6 06:53:11.006: INFO: Running AfterSuite actions on all nodes
Jul  6 06:53:11.006: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:53:11.006: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:53:11.006: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:53:11.006: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:53:11.006: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 22 lines ...
Jul  6 06:39:17.338: INFO: Unable to read wheezy_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:39:47.435: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:40:17.536: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:40:47.634: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:41:17.732: INFO: Unable to read jessie_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:41:47.828: INFO: Unable to read jessie_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:41:47.829: INFO: Lookups using dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:42:22.930: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:42:53.028: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:43:23.126: INFO: Unable to read wheezy_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:43:53.224: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:44:23.321: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:44:53.418: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:45:23.515: INFO: Unable to read jessie_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:45:53.612: INFO: Unable to read jessie_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:45:53.612: INFO: Lookups using dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:46:27.926: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:46:58.023: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:47:28.121: INFO: Unable to read wheezy_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:47:58.218: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:48:28.315: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:48:58.413: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:49:28.511: INFO: Unable to read jessie_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:49:58.608: INFO: Unable to read jessie_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:49:58.608: INFO: Lookups using dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:50:32.928: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:51:03.026: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:51:33.123: INFO: Unable to read wheezy_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:52:03.221: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:52:33.317: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:53:03.418: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:53:33.515: INFO: Unable to read jessie_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:54:03.613: INFO: Unable to read jessie_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:54:03.613: INFO: Lookups using dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:54:33.710: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:55:03.808: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:55:33.906: INFO: Unable to read wheezy_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:56:04.003: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:56:34.101: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:57:04.199: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:57:34.296: INFO: Unable to read jessie_udp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:58:04.393: INFO: Unable to read jessie_tcp@PodARecord from pod dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56: the server is currently unable to handle the request (get pods dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56)
Jul  6 06:58:04.393: INFO: Lookups using dns-549/dns-test-51ac5d6f-3b30-49c8-bd71-23be4b48aa56 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:58:04.393: FAIL: Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 162 lines ...
• Failure [1224.261 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:58:04.393: Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":36,"skipped":295,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] DNS should provide DNS for the cluster  [Conformance]"]}
Jul  6 06:58:08.430: INFO: Running AfterSuite actions on all nodes
Jul  6 06:58:08.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:58:08.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:58:08.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:58:08.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:58:08.430: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 23 lines ...
Jul  6 06:40:52.323: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:41:22.420: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:41:52.516: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:42:22.612: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:42:52.709: INFO: Unable to read jessie_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:43:22.806: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:43:22.807: INFO: Lookups using dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:43:57.903: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:44:27.999: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:44:58.096: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:45:28.192: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:45:58.289: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:46:28.385: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:46:58.482: INFO: Unable to read jessie_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:47:28.579: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:47:28.579: INFO: Lookups using dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:48:02.905: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:48:33.002: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:49:03.098: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:49:33.195: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:50:03.292: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:50:33.389: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:51:03.487: INFO: Unable to read jessie_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:51:33.583: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:51:33.583: INFO: Lookups using dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:52:07.907: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:52:38.004: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:53:08.100: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:53:38.198: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:54:08.295: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:54:38.392: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:55:08.488: INFO: Unable to read jessie_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:55:38.584: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:55:38.584: INFO: Lookups using dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:56:08.681: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:56:38.779: INFO: Unable to read wheezy_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:57:08.878: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:57:38.976: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:58:09.073: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:58:39.171: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:59:09.270: INFO: Unable to read jessie_udp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:59:39.369: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41: the server is currently unable to handle the request (get pods dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41)
Jul  6 06:59:39.369: INFO: Lookups using dns-8502/dns-test-2e73d57e-54fa-4b34-ac4d-122d10349b41 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local wheezy_hosts@dns-querier-2 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-8502.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jul  6 06:59:39.369: FAIL: Unexpected error:
    <*errors.errorString | 0xc000240240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 159 lines ...
• Failure [1224.410 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  6 06:59:39.369: Unexpected error:
      <*errors.errorString | 0xc000240240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":26,"skipped":235,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
Jul  6 06:59:43.480: INFO: Running AfterSuite actions on all nodes
Jul  6 06:59:43.480: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:59:43.480: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:59:43.480: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:59:43.480: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:59:43.480: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  6 06:59:43.480: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  6 06:59:43.480: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"FAILED [sig-network] DNS should provide DNS for services  [Conformance]","total":-1,"completed":13,"skipped":101,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-network] DNS should provide DNS for services  [Conformance]"]}
Jul  6 06:42:29.657: INFO: Running AfterSuite actions on all nodes
Jul  6 06:42:29.657: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  6 06:42:29.657: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  6 06:42:29.657: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  6 06:42:29.657: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  6 06:42:29.657: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  6 06:42:29.657: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  6 06:42:29.657: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Jul  6 06:59:43.516: INFO: Running AfterSuite actions on node 1
Jul  6 06:59:43.516: INFO: Dumping logs locally to: /logs/artifacts/5e7161ec-de1f-11eb-a95e-eac2a4935ad2
Jul  6 06:59:43.516: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory



Summarizing 77 Failures:

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource with pruning [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826

[Fail] [sig-network] Services [It] should serve multiport endpoints from pods  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:910

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826

[Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1187

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes [It] should store data 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning [It] should provision storage with mount options 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:418

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy [It] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:250

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should deny crd creation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059

[Fail] [sig-network] DNS [It] should resolve DNS of partial qualified names for the cluster [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should unconditionally reject operations on fail closed webhook [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1275

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath [It] should support readOnly directory specified in the volumeMount 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:183

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology [It] should provision a volume and schedule a pod with AllowedTopologies 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:180

[Fail] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs [It] create a PVC and non-pre-bound PV: test write access 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52

[Fail] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook [It] should execute prestop exec hook properly [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:79

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes [It] should store data 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:505

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny pod and configmap creation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:909

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should honor timeout [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2188

[Fail] [sig-network] Conntrack [It] should drop INVALID conntrack entries 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] should perform rolling updates and roll backs of template modifications with PVCs 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes [It] should store data 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes [It] should store data 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] listing mutating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:680

[Fail] [sig-network] Services [It] should be able to change the type from NodePort to ExternalName [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1455

[Fail] [sig-node] PreStop [It] should call prestop when killing a pod  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151

[Fail] [sig-network] DNS [It] should provide DNS for ExternalName services [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1361

[Fail] [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner [It] should create and delete persistent volumes [fast] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:463

[Fail] [sig-network] Services [It] should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] patching/updating a mutating webhook should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:527

[Fail] [sig-network] Services [It] should be able to change the type from ClusterIP to ExternalName [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1411

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate pod and apply defaults after mutation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1055

[Fail] [sig-cli] Kubectl client Update Demo [It] should create and stop a replication controller  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314

[Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1369

[Fail] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [It] should be able to convert from CR v1 to CR v2 [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499

[Fail] [sig-network] Services [It] should allow pods to hairpin back to themselves through services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1030

[Fail] [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes [It] should store data 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:505

[Fail] [sig-network] Services [It] should be able to update service type to NodePort listening on same port number but different protocols 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1263

[Fail] [sig-network] Conntrack [It] should be able to preserve UDP traffic when server pod cycles for a ClusterIP service 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook [It] should execute poststart http hook properly [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103

[Fail] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook [It] should execute poststart exec hook properly [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:375

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny custom resource creation, update and deletion [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1749

[Fail] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] [It] should be able to convert a non homogeneous list of CRs [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:499

[Fail] [sig-network] DNS [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217

[Fail] [sig-network] Proxy version v1 [It] should proxy through a service and a pod  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate custom resource with different stored version [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1826

[Fail] [sig-network] Services [It] should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1219

[Fail] [sig-network] Services [It] should create endpoints for unready pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1706

[Fail] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs [It] should create a non-pre-bound PV and PVC: test write access  
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] listing validating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] [sig-apps] Deployment [It] should not disrupt a cloud load-balancer's connectivity during rollout 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:1394

[Fail] [sig-api-machinery] Aggregator [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:406

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should be able to deny attaching pod [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:961

[Fail] [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy [It] should test that a PV becomes Available and is clean after the PVC is deleted. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:292

[Fail] [sig-network] DNS [It] should support configurable pod resolv.conf 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:565

[Fail] [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns [It] should create 2 PVs and 4 PVCs: test write access 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238

[Fail] [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs [It] create a PV and a pre-bound PVC: test write access 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52

[Fail] [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns [It] should create 3 PVs and 3 PVCs: test write access 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:248

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] patching/updating a validating webhook should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432

[Fail] [sig-apps] ReplicaSet [It] should serve a basic image on each replica with a public image  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110

[Fail] [sig-network] Services [It] should serve a basic endpoint from pods  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:812

[Fail] [sig-network] Services [It] should be able to up and down services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1049

[Fail] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] [It] should mutate configmap [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:988

[Fail] [sig-auth] ServiceAccounts [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/service_accounts.go:789

[Fail] [sig-network] Services [It] should implement service.kubernetes.io/service-proxy-name 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1907

[Fail] [sig-network] DNS [It] should provide DNS for services  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217

[Fail] [sig-network] DNS [It] should provide DNS for pods for Subdomain [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217

[Fail] [sig-apps] ReplicationController [It] should serve a basic image on each replica with a public image  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65

[Fail] [sig-network] Services [It] should implement service.kubernetes.io/headless 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1959

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:93

[Fail] [sig-cli] Kubectl client Update Demo [It] should scale a replication controller  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327

[Fail] [sig-network] DNS [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for intra-pod communication: http [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82

[Fail] [sig-network] DNS [It] should provide DNS for the cluster  [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463

[Fail] [sig-network] DNS [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:463

Ran 758 of 6404 Specs in 2880.356 seconds
FAIL! -- 681 Passed | 77 Failed | 0 Pending | 5646 Skipped


Ginkgo ran 1 suite in 48m13.250217435s
Test Suite Failed
F0706 06:59:43.552447   11799 tester.go:389] failed to run ginkgo tester: exit status 1
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc0000520d0, 0x58, 0xc3)
	/home/prow/go/pkg/mod/k8s.io/klog/v2@v2.8.0/klog.go:1021 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x1d2a400, 0xc000000003, 0x0, 0x0, 0xc0001f20e0, 0x17db020, 0x9, 0x185, 0x0)
	/home/prow/go/pkg/mod/k8s.io/klog/v2@v2.8.0/klog.go:970 +0x191
k8s.io/klog/v2.(*loggingT).printf(0x1d2a400, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x13be3fc, 0x1f, 0xc000088000, 0x1, ...)
... skipping 1469 lines ...
route-table:rtb-03ad4f72ad0ec0a9a	ok
vpc:vpc-009387d2cf9dde909	ok
dhcp-options:dopt-01af7e9476217ac38	ok
Deleted kubectl config for e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io

Deleted cluster: "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io"
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...