PR: olemarkus: Enable IRSA for CCM
Result: ABORTED
Tests: 0 failed / 0 succeeded
Started: 2021-07-05 18:29
Elapsed: 1h9m
Revision: 9fc9ab06eed1e11808be3b339f268b5597042e77
Refs: 11818

No Test Failures!


Error lines from build-log.txt

... skipping 501 lines ...
I0705 18:35:17.559823    4321 dumplogs.go:38] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops toolbox dump --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --dir /logs/artifacts --private-key /etc/aws-ssh/aws-ssh-private --ssh-user ubuntu
I0705 18:35:17.590940   11837 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0705 18:35:17.591089   11837 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0705 18:35:17.591097   11837 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
W0705 18:35:18.155540    4321 down.go:34] Dumping cluster logs at the start of Down() failed: exit status 1
I0705 18:35:18.155643    4321 down.go:48] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops delete cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --yes
I0705 18:35:18.180553   11847 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0705 18:35:18.180683   11847 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0705 18:35:18.180692   11847 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true

error reading cluster configuration: Cluster.kops.k8s.io "e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io" not found
I0705 18:35:18.782172    4321 http.go:37] curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip
2021/07/05 18:35:18 failed to get external ip from metadata service: http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip returned 404
I0705 18:35:18.790626    4321 http.go:37] curl https://ip.jsb.workers.dev
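The two curl lines above show the harness discovering its own external IP: the GCE metadata endpoint returns 404 (the Prow pod has no external access-config), so it falls back to a public IP echo service. A minimal offline sketch of that fallback pattern, with stub functions standing in for the two HTTP calls (the real harness uses curl against the URLs shown in the log):

```shell
# Fallback pattern for external-IP discovery, sketched with stubs so it
# runs without network access. fetch_metadata stands in for the GCE
# metadata call that returned 404 above; fetch_fallback stands in for
# the https://ip.jsb.workers.dev request.
fetch_metadata() { return 1; }               # simulate the 404 failure
fetch_fallback() { echo "34.70.159.231"; }   # IP the log later uses for --admin-access
external_ip=$(fetch_metadata || fetch_fallback)
echo "external ip: $external_ip"
```

The discovered address is what later appears as `--admin-access 34.70.159.231/32` in the `kops create cluster` invocation below.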
I0705 18:35:18.884281    4321 up.go:144] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops create cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --cloud aws --kubernetes-version https://storage.googleapis.com/kubernetes-release/release/v1.22.0-beta.0 --ssh-public-key /etc/aws-ssh/aws-ssh-public --override cluster.spec.nodePortAccess=0.0.0.0/0 --yes --image=099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210621 --channel=alpha --networking=kubenet --container-runtime=containerd --override=cluster.spec.cloudControllerManager.cloudProvider=aws --override=cluster.spec.serviceAccountIssuerDiscovery.discoveryStore=s3://k8s-kops-prow/kops-grid-scenario-aws-cloud-controller-manager-irsa/discovery --override=cluster.spec.serviceAccountIssuerDiscovery.enableAWSOIDCProvider=true --admin-access 34.70.159.231/32 --master-count 1 --master-volume-size 48 --node-count 4 --node-volume-size 48 --zones eu-central-1a --master-size c5.large
I0705 18:35:18.912090   11857 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0705 18:35:18.912208   11857 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0705 18:35:18.912214   11857 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
I0705 18:35:19.004970   11857 create_cluster.go:740] Using SSH public key: /etc/aws-ssh/aws-ssh-public
... skipping 33 lines ...
I0705 18:35:46.133896    4321 up.go:181] /home/prow/go/src/k8s.io/kops/bazel-bin/cmd/kops/linux-amd64/kops validate cluster --name e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --count 10 --wait 15m0s
I0705 18:35:46.161790   11877 featureflag.go:167] FeatureFlag "SpecOverrideFlag"=true
I0705 18:35:46.161910   11877 featureflag.go:167] FeatureFlag "AlphaAllowGCE"=true
I0705 18:35:46.161919   11877 featureflag.go:167] FeatureFlag "UseServiceAccountIAM"=true
Validating cluster e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io

W0705 18:35:47.540006   11877 validate_cluster.go:173] (will retry): unexpected error during validation: unable to resolve Kubernetes cluster API URL dns: lookup api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io on 10.63.240.10:53: no such host
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

NODE STATUS
NAME	ROLE	READY

VALIDATION ERRORS
KIND	NAME		MESSAGE
dns	apiserver	Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
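The validation error above describes kops's DNS bootstrap mechanism: `api.<cluster>` is pre-created pointing at the documentation-range placeholder 203.0.113.123, and the dns-controller deployment later rewrites it to the real master IP. A hedged diagnostic sketch of that check (the `dig` call is commented out and replaced with an example value so the sketch runs offline):

```shell
# kops pre-creates the api.<cluster> A record with a placeholder address;
# cluster validation keeps failing until dns-controller rewrites it.
PLACEHOLDER="203.0.113.123"
# A real check would be:
#   resolved=$(dig +short "api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io")
resolved="203.0.113.123"   # example value standing in for the dig output
if [ -z "$resolved" ] || [ "$resolved" = "$PLACEHOLDER" ]; then
  status="dns-controller has not updated the API record yet"
else
  status="API record updated to $resolved"
fi
echo "$status"
```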
... skipping 336 lines (the identical dns/apiserver validation block above, retried every ~10s from 18:35:57 through 18:39:18) ...
W0705 18:39:28.381487   11877 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 8 lines ...
Machine	i-08c19c708dca6de76				machine "i-08c19c708dca6de76" has not yet joined cluster
Machine	i-0c9540d6fb78f4b7f				machine "i-0c9540d6fb78f4b7f" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-rfl4m	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-rfl4m" is pending
Pod	kube-system/coredns-f45c4bf76-pxqss		system-cluster-critical pod "coredns-f45c4bf76-pxqss" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-f6cbs	system-cluster-critical pod "ebs-csi-controller-566c97f85c-f6cbs" is pending

Validation Failed
W0705 18:39:41.449451   11877 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 8 lines ...
Machine	i-08c19c708dca6de76				machine "i-08c19c708dca6de76" has not yet joined cluster
Machine	i-0c9540d6fb78f4b7f				machine "i-0c9540d6fb78f4b7f" has not yet joined cluster
Pod	kube-system/coredns-autoscaler-6f594f4c58-rfl4m	system-cluster-critical pod "coredns-autoscaler-6f594f4c58-rfl4m" is pending
Pod	kube-system/coredns-f45c4bf76-pxqss		system-cluster-critical pod "coredns-f45c4bf76-pxqss" is pending
Pod	kube-system/ebs-csi-controller-566c97f85c-f6cbs	system-cluster-critical pod "ebs-csi-controller-566c97f85c-f6cbs" is pending

Validation Failed
W0705 18:39:53.436245   11877 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 13 lines ...
Pod	kube-system/ebs-csi-controller-566c97f85c-f6cbs	system-cluster-critical pod "ebs-csi-controller-566c97f85c-f6cbs" is pending
Pod	kube-system/ebs-csi-node-cgr92			system-node-critical pod "ebs-csi-node-cgr92" is pending
Pod	kube-system/ebs-csi-node-gsz44			system-node-critical pod "ebs-csi-node-gsz44" is pending
Pod	kube-system/ebs-csi-node-n46mv			system-node-critical pod "ebs-csi-node-n46mv" is pending
Pod	kube-system/ebs-csi-node-q6s54			system-node-critical pod "ebs-csi-node-q6s54" is pending

Validation Failed
W0705 18:40:05.539055   11877 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 7 lines ...

VALIDATION ERRORS
KIND	NAME						MESSAGE
Pod	kube-system/ebs-csi-controller-566c97f85c-f6cbs	system-cluster-critical pod "ebs-csi-controller-566c97f85c-f6cbs" is pending
Pod	kube-system/ebs-csi-node-gsz44			system-node-critical pod "ebs-csi-node-gsz44" is pending

Validation Failed
W0705 18:40:17.495879   11877 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 36 lines ...
ip-172-20-60-158.eu-central-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-47-191.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-47-191.eu-central-1.compute.internal" is pending

Validation Failed
W0705 18:40:53.540241   11877 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 6 lines ...
ip-172-20-60-158.eu-central-1.compute.internal	node	True

VALIDATION ERRORS
KIND	NAME									MESSAGE
Pod	kube-system/kube-proxy-ip-172-20-36-144.eu-central-1.compute.internal	system-node-critical pod "kube-proxy-ip-172-20-36-144.eu-central-1.compute.internal" is pending

Validation Failed
W0705 18:41:05.554610   11877 validate_cluster.go:221] (will retry): cluster not yet healthy
INSTANCE GROUPS
NAME			ROLE	MACHINETYPE	MIN	MAX	SUBNETS
master-eu-central-1a	Master	c5.large	1	1	eu-central-1a
nodes-eu-central-1a	Node	t3.medium	4	4	eu-central-1a

... skipping 573 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [openstack] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1092
------------------------------
... skipping 79 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 296 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:43:58.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9381" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:43:58.927: INFO: Only supported for providers [vsphere] (not aws)
... skipping 170 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:00.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-1738" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":1,"skipped":42,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl apply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:803
    apply set/view last-applied
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:838
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:7.827 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: too few pods, absolute => should not allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:265
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:03.662: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 69 lines ...
W0705 18:43:56.441321   12661 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  5 18:43:56.441: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Jul  5 18:43:56.782: INFO: Waiting up to 5m0s for pod "var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8" in namespace "var-expansion-238" to be "Succeeded or Failed"
Jul  5 18:43:56.904: INFO: Pod "var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8": Phase="Pending", Reason="", readiness=false. Elapsed: 122.006338ms
Jul  5 18:43:59.014: INFO: Pod "var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23178133s
Jul  5 18:44:01.122: INFO: Pod "var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340727047s
Jul  5 18:44:03.232: INFO: Pod "var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.449902765s
STEP: Saw pod success
Jul  5 18:44:03.232: INFO: Pod "var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8" satisfied condition "Succeeded or Failed"
Jul  5 18:44:03.344: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8 container dapi-container: <nil>
STEP: delete the pod
Jul  5 18:44:04.586: INFO: Waiting for pod var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8 to disappear
Jul  5 18:44:04.694: INFO: Pod var-expansion-3833b6ad-12f2-4c6b-8c06-5b29ec3b41f8 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 15 lines ...
W0705 18:43:57.656126   12471 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  5 18:43:57.656: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Jul  5 18:43:57.996: INFO: Waiting up to 5m0s for pod "var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99" in namespace "var-expansion-4250" to be "Succeeded or Failed"
Jul  5 18:43:58.105: INFO: Pod "var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99": Phase="Pending", Reason="", readiness=false. Elapsed: 109.086653ms
Jul  5 18:44:00.218: INFO: Pod "var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221600979s
Jul  5 18:44:02.328: INFO: Pod "var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331502091s
Jul  5 18:44:04.437: INFO: Pod "var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.441084572s
STEP: Saw pod success
Jul  5 18:44:04.437: INFO: Pod "var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99" satisfied condition "Succeeded or Failed"
Jul  5 18:44:04.546: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99 container dapi-container: <nil>
STEP: delete the pod
Jul  5 18:44:04.772: INFO: Waiting for pod var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99 to disappear
Jul  5 18:44:04.881: INFO: Pod var-expansion-941a8426-7bc8-4710-97f1-74b605e09d99 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:9.251 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0705 18:43:56.799433   12629 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  5 18:43:56.799: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Jul  5 18:43:57.137: INFO: Waiting up to 5m0s for pod "client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e" in namespace "containers-3595" to be "Succeeded or Failed"
Jul  5 18:43:57.245: INFO: Pod "client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 108.68787ms
Jul  5 18:43:59.357: INFO: Pod "client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220129937s
Jul  5 18:44:01.467: INFO: Pod "client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330740616s
Jul  5 18:44:03.581: INFO: Pod "client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444070596s
Jul  5 18:44:05.692: INFO: Pod "client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.555303494s
STEP: Saw pod success
Jul  5 18:44:05.692: INFO: Pod "client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e" satisfied condition "Succeeded or Failed"
Jul  5 18:44:05.802: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e container agnhost-container: <nil>
STEP: delete the pod
Jul  5 18:44:06.037: INFO: Waiting for pod client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e to disappear
Jul  5 18:44:06.146: INFO: Pod client-containers-65df805d-df3d-4d95-bf16-e1eb26d33f8e no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.554 seconds]
[sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:06.483: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
W0705 18:43:56.492826   12650 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jul  5 18:43:56.492: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Jul  5 18:43:56.893: INFO: Waiting up to 5m0s for pod "security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7" in namespace "security-context-951" to be "Succeeded or Failed"
Jul  5 18:43:57.005: INFO: Pod "security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7": Phase="Pending", Reason="", readiness=false. Elapsed: 112.880797ms
Jul  5 18:43:59.116: INFO: Pod "security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223132377s
Jul  5 18:44:01.225: INFO: Pod "security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332765852s
Jul  5 18:44:03.338: INFO: Pod "security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445020445s
Jul  5 18:44:05.448: INFO: Pod "security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.555210973s
STEP: Saw pod success
Jul  5 18:44:05.448: INFO: Pod "security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7" satisfied condition "Succeeded or Failed"
Jul  5 18:44:05.563: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7 container test-container: <nil>
STEP: delete the pod
Jul  5 18:44:06.266: INFO: Waiting for pod security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7 to disappear
Jul  5 18:44:06.374: INFO: Pod security-context-e48bc727-96cb-41ce-b6b4-d07d678662c7 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 33 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 48 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl copy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1355
    should copy a file from a running Pod
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1372
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:07.987: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 102 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:08.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2027" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:44:05.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
STEP: Creating a pod to test hostPath mode
Jul  5 18:44:05.686: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1743" to be "Succeeded or Failed"
Jul  5 18:44:05.795: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 108.913259ms
Jul  5 18:44:07.905: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218577885s
Jul  5 18:44:10.015: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329067518s
STEP: Saw pod success
Jul  5 18:44:10.015: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  5 18:44:10.124: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-host-path-test container test-container-1: <nil>
STEP: delete the pod
Jul  5 18:44:10.356: INFO: Waiting for pod pod-host-path-test to disappear
Jul  5 18:44:10.470: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.657 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should give a volume the correct mode [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:48
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:10.708: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 45 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:10.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "networkpolicies-8263" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations","total":-1,"completed":3,"skipped":21,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:15.221 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/job.go:51
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:44:06.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-e7ef209c-0b8c-4d98-a8e1-f5cd165a5981
STEP: Creating secret with name secret-projected-all-test-volume-84417268-1887-47e5-9fd6-b26c2574bbae
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  5 18:44:07.588: INFO: Waiting up to 5m0s for pod "projected-volume-8cf1aa0d-072f-4583-9470-760cb2d949ab" in namespace "projected-8875" to be "Succeeded or Failed"
Jul  5 18:44:07.697: INFO: Pod "projected-volume-8cf1aa0d-072f-4583-9470-760cb2d949ab": Phase="Pending", Reason="", readiness=false. Elapsed: 108.946883ms
Jul  5 18:44:09.806: INFO: Pod "projected-volume-8cf1aa0d-072f-4583-9470-760cb2d949ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218082816s
Jul  5 18:44:11.917: INFO: Pod "projected-volume-8cf1aa0d-072f-4583-9470-760cb2d949ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.328800447s
STEP: Saw pod success
Jul  5 18:44:11.917: INFO: Pod "projected-volume-8cf1aa0d-072f-4583-9470-760cb2d949ab" satisfied condition "Succeeded or Failed"
Jul  5 18:44:12.026: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod projected-volume-8cf1aa0d-072f-4583-9470-760cb2d949ab container projected-all-volume-test: <nil>
STEP: delete the pod
Jul  5 18:44:12.250: INFO: Waiting for pod projected-volume-8cf1aa0d-072f-4583-9470-760cb2d949ab to disappear
Jul  5 18:44:12.375: INFO: Pod projected-volume-8cf1aa0d-072f-4583-9470-760cb2d949ab no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.883 seconds]
[sig-storage] Projected combined
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:12.621: INFO: Driver local doesn't support ext3 -- skipping
... skipping 34 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:13.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9943" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":3,"skipped":8,"failed":0}
[BeforeEach] [sig-storage] Volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:44:13.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 148 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:14.881: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 32 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":1,"skipped":20,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:44:11.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  5 18:44:11.970: INFO: Waiting up to 5m0s for pod "pod-6df2d3ad-f88b-465e-983f-187e073e4871" in namespace "emptydir-2264" to be "Succeeded or Failed"
Jul  5 18:44:12.078: INFO: Pod "pod-6df2d3ad-f88b-465e-983f-187e073e4871": Phase="Pending", Reason="", readiness=false. Elapsed: 108.771927ms
Jul  5 18:44:14.188: INFO: Pod "pod-6df2d3ad-f88b-465e-983f-187e073e4871": Phase="Running", Reason="", readiness=true. Elapsed: 2.218017449s
Jul  5 18:44:16.297: INFO: Pod "pod-6df2d3ad-f88b-465e-983f-187e073e4871": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327668851s
STEP: Saw pod success
Jul  5 18:44:16.297: INFO: Pod "pod-6df2d3ad-f88b-465e-983f-187e073e4871" satisfied condition "Succeeded or Failed"
Jul  5 18:44:16.407: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-6df2d3ad-f88b-465e-983f-187e073e4871 container test-container: <nil>
STEP: delete the pod
Jul  5 18:44:16.639: INFO: Waiting for pod pod-6df2d3ad-f88b-465e-983f-187e073e4871 to disappear
Jul  5 18:44:16.748: INFO: Pod pod-6df2d3ad-f88b-465e-983f-187e073e4871 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:5.660 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:16.987: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
• [SLOW TEST:21.729 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:17.644: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 115 lines ...
• [SLOW TEST:6.799 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":2,"skipped":32,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 22 lines ...
Jul  5 18:44:14.594: INFO: PersistentVolumeClaim pvc-zqmqs found but phase is Pending instead of Bound.
Jul  5 18:44:16.704: INFO: PersistentVolumeClaim pvc-zqmqs found and phase=Bound (12.769336394s)
Jul  5 18:44:16.704: INFO: Waiting up to 3m0s for PersistentVolume local-rtcnc to have phase Bound
Jul  5 18:44:16.812: INFO: PersistentVolume local-rtcnc found and phase=Bound (107.839447ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-27bt
STEP: Creating a pod to test subpath
Jul  5 18:44:17.137: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-27bt" in namespace "provisioning-5650" to be "Succeeded or Failed"
Jul  5 18:44:17.248: INFO: Pod "pod-subpath-test-preprovisionedpv-27bt": Phase="Pending", Reason="", readiness=false. Elapsed: 111.062037ms
Jul  5 18:44:19.371: INFO: Pod "pod-subpath-test-preprovisionedpv-27bt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233847672s
Jul  5 18:44:21.481: INFO: Pod "pod-subpath-test-preprovisionedpv-27bt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343490756s
Jul  5 18:44:23.590: INFO: Pod "pod-subpath-test-preprovisionedpv-27bt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.453192015s
STEP: Saw pod success
Jul  5 18:44:23.591: INFO: Pod "pod-subpath-test-preprovisionedpv-27bt" satisfied condition "Succeeded or Failed"
Jul  5 18:44:23.700: INFO: Trying to get logs from node ip-172-20-47-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-27bt container test-container-subpath-preprovisionedpv-27bt: <nil>
STEP: delete the pod
Jul  5 18:44:23.932: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-27bt to disappear
Jul  5 18:44:24.041: INFO: Pod pod-subpath-test-preprovisionedpv-27bt no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-27bt
Jul  5 18:44:24.042: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-27bt" in namespace "provisioning-5650"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":1,"skipped":11,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    One pod requesting one prebound PVC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:209
      should be able to mount volume and read from pod1
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:232
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":3,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] volume on tmpfs should have the correct mode using FSGroup
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul  5 18:44:22.826: INFO: Waiting up to 5m0s for pod "pod-594c7d5a-7981-4895-baa3-e664464bbe39" in namespace "emptydir-8671" to be "Succeeded or Failed"
Jul  5 18:44:22.940: INFO: Pod "pod-594c7d5a-7981-4895-baa3-e664464bbe39": Phase="Pending", Reason="", readiness=false. Elapsed: 113.479544ms
Jul  5 18:44:25.051: INFO: Pod "pod-594c7d5a-7981-4895-baa3-e664464bbe39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224874508s
Jul  5 18:44:27.163: INFO: Pod "pod-594c7d5a-7981-4895-baa3-e664464bbe39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336936245s
STEP: Saw pod success
Jul  5 18:44:27.163: INFO: Pod "pod-594c7d5a-7981-4895-baa3-e664464bbe39" satisfied condition "Succeeded or Failed"
Jul  5 18:44:27.273: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-594c7d5a-7981-4895-baa3-e664464bbe39 container test-container: <nil>
STEP: delete the pod
Jul  5 18:44:27.500: INFO: Waiting for pod pod-594c7d5a-7981-4895-baa3-e664464bbe39 to disappear
Jul  5 18:44:27.610: INFO: Pod pod-594c7d5a-7981-4895-baa3-e664464bbe39 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 6 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:48
    volume on tmpfs should have the correct mode using FSGroup
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:75
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":3,"skipped":35,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:27.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-683" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":4,"skipped":8,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:27.874: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 78 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:28.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":4,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:28.341: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 94 lines ...
      Driver "local" does not provide raw block - skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:113
------------------------------
S
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:29.002: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 60 lines ...
Jul  5 18:44:15.412: INFO: PersistentVolumeClaim pvc-z68t5 found but phase is Pending instead of Bound.
Jul  5 18:44:17.523: INFO: PersistentVolumeClaim pvc-z68t5 found and phase=Bound (4.331493889s)
Jul  5 18:44:17.523: INFO: Waiting up to 3m0s for PersistentVolume local-njb2k to have phase Bound
Jul  5 18:44:17.633: INFO: PersistentVolume local-njb2k found and phase=Bound (110.02122ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-qsr6
STEP: Creating a pod to test subpath
Jul  5 18:44:17.965: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qsr6" in namespace "provisioning-9790" to be "Succeeded or Failed"
Jul  5 18:44:18.074: INFO: Pod "pod-subpath-test-preprovisionedpv-qsr6": Phase="Pending", Reason="", readiness=false. Elapsed: 108.876126ms
Jul  5 18:44:20.185: INFO: Pod "pod-subpath-test-preprovisionedpv-qsr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21977476s
Jul  5 18:44:22.298: INFO: Pod "pod-subpath-test-preprovisionedpv-qsr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.33261553s
STEP: Saw pod success
Jul  5 18:44:22.298: INFO: Pod "pod-subpath-test-preprovisionedpv-qsr6" satisfied condition "Succeeded or Failed"
Jul  5 18:44:22.410: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-qsr6 container test-container-subpath-preprovisionedpv-qsr6: <nil>
STEP: delete the pod
Jul  5 18:44:22.637: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qsr6 to disappear
Jul  5 18:44:22.747: INFO: Pod pod-subpath-test-preprovisionedpv-qsr6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qsr6
Jul  5 18:44:22.748: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qsr6" in namespace "provisioning-9790"
STEP: Creating pod pod-subpath-test-preprovisionedpv-qsr6
STEP: Creating a pod to test subpath
Jul  5 18:44:22.972: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-qsr6" in namespace "provisioning-9790" to be "Succeeded or Failed"
Jul  5 18:44:23.083: INFO: Pod "pod-subpath-test-preprovisionedpv-qsr6": Phase="Pending", Reason="", readiness=false. Elapsed: 110.976044ms
Jul  5 18:44:25.195: INFO: Pod "pod-subpath-test-preprovisionedpv-qsr6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223051735s
Jul  5 18:44:27.307: INFO: Pod "pod-subpath-test-preprovisionedpv-qsr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.334538515s
STEP: Saw pod success
Jul  5 18:44:27.307: INFO: Pod "pod-subpath-test-preprovisionedpv-qsr6" satisfied condition "Succeeded or Failed"
Jul  5 18:44:27.419: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-qsr6 container test-container-subpath-preprovisionedpv-qsr6: <nil>
STEP: delete the pod
Jul  5 18:44:27.652: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-qsr6 to disappear
Jul  5 18:44:27.761: INFO: Pod pod-subpath-test-preprovisionedpv-qsr6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-qsr6
Jul  5 18:44:27.762: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-qsr6" in namespace "provisioning-9790"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directories when readOnly specified in the volumeSource
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:50
[It] new files should be created with FSGroup ownership when container is non-root
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/empty_dir.go:59
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  5 18:44:30.027: INFO: Waiting up to 5m0s for pod "pod-c32b3fdd-24f0-485a-834e-877ed7136a05" in namespace "emptydir-568" to be "Succeeded or Failed"
Jul  5 18:44:30.137: INFO: Pod "pod-c32b3fdd-24f0-485a-834e-877ed7136a05": Phase="Pending", Reason="", readiness=false. Elapsed: 110.190876ms
Jul  5 18:44:32.248: INFO: Pod "pod-c32b3fdd-24f0-485a-834e-877ed7136a05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220939324s
STEP: Saw pod success
Jul  5 18:44:32.248: INFO: Pod "pod-c32b3fdd-24f0-485a-834e-877ed7136a05" satisfied condition "Succeeded or Failed"
Jul  5 18:44:32.357: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-c32b3fdd-24f0-485a-834e-877ed7136a05 container test-container: <nil>
STEP: delete the pod
Jul  5 18:44:32.582: INFO: Waiting for pod pod-c32b3fdd-24f0-485a-834e-877ed7136a05 to disappear
Jul  5 18:44:32.690: INFO: Pod pod-c32b3fdd-24f0-485a-834e-877ed7136a05 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:32.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-568" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root","total":-1,"completed":3,"skipped":19,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 94 lines ...
• [SLOW TEST:42.106 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:37.970: INFO: Only supported for providers [openstack] (not aws)
... skipping 120 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 27 lines ...
Jul  5 18:44:29.954: INFO: PersistentVolumeClaim pvc-r5lwc found but phase is Pending instead of Bound.
Jul  5 18:44:32.066: INFO: PersistentVolumeClaim pvc-r5lwc found and phase=Bound (14.91266397s)
Jul  5 18:44:32.066: INFO: Waiting up to 3m0s for PersistentVolume local-hx47p to have phase Bound
Jul  5 18:44:32.176: INFO: PersistentVolume local-hx47p found and phase=Bound (110.034025ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-jnlr
STEP: Creating a pod to test subpath
Jul  5 18:44:32.511: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jnlr" in namespace "provisioning-2416" to be "Succeeded or Failed"
Jul  5 18:44:32.621: INFO: Pod "pod-subpath-test-preprovisionedpv-jnlr": Phase="Pending", Reason="", readiness=false. Elapsed: 110.042745ms
Jul  5 18:44:34.732: INFO: Pod "pod-subpath-test-preprovisionedpv-jnlr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220539617s
Jul  5 18:44:36.843: INFO: Pod "pod-subpath-test-preprovisionedpv-jnlr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331539555s
STEP: Saw pod success
Jul  5 18:44:36.843: INFO: Pod "pod-subpath-test-preprovisionedpv-jnlr" satisfied condition "Succeeded or Failed"
Jul  5 18:44:36.954: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-jnlr container test-container-subpath-preprovisionedpv-jnlr: <nil>
STEP: delete the pod
Jul  5 18:44:37.184: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jnlr to disappear
Jul  5 18:44:37.297: INFO: Pod pod-subpath-test-preprovisionedpv-jnlr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jnlr
Jul  5 18:44:37.297: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jnlr" in namespace "provisioning-2416"
STEP: Creating pod pod-subpath-test-preprovisionedpv-jnlr
STEP: Creating a pod to test subpath
Jul  5 18:44:37.519: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-jnlr" in namespace "provisioning-2416" to be "Succeeded or Failed"
Jul  5 18:44:37.629: INFO: Pod "pod-subpath-test-preprovisionedpv-jnlr": Phase="Pending", Reason="", readiness=false. Elapsed: 110.325792ms
Jul  5 18:44:39.740: INFO: Pod "pod-subpath-test-preprovisionedpv-jnlr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220900366s
STEP: Saw pod success
Jul  5 18:44:39.740: INFO: Pod "pod-subpath-test-preprovisionedpv-jnlr" satisfied condition "Succeeded or Failed"
Jul  5 18:44:39.850: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-jnlr container test-container-subpath-preprovisionedpv-jnlr: <nil>
STEP: delete the pod
Jul  5 18:44:40.082: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-jnlr to disappear
Jul  5 18:44:40.193: INFO: Pod pod-subpath-test-preprovisionedpv-jnlr no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-jnlr
Jul  5 18:44:40.193: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-jnlr" in namespace "provisioning-2416"
... skipping 51 lines ...
Jul  5 18:44:29.170: INFO: PersistentVolumeClaim pvc-bkmhf found but phase is Pending instead of Bound.
Jul  5 18:44:31.280: INFO: PersistentVolumeClaim pvc-bkmhf found and phase=Bound (12.77047424s)
Jul  5 18:44:31.280: INFO: Waiting up to 3m0s for PersistentVolume local-mqhkt to have phase Bound
Jul  5 18:44:31.390: INFO: PersistentVolume local-mqhkt found and phase=Bound (109.287333ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-f494
STEP: Creating a pod to test subpath
Jul  5 18:44:31.721: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-f494" in namespace "provisioning-3902" to be "Succeeded or Failed"
Jul  5 18:44:31.832: INFO: Pod "pod-subpath-test-preprovisionedpv-f494": Phase="Pending", Reason="", readiness=false. Elapsed: 110.912768ms
Jul  5 18:44:33.950: INFO: Pod "pod-subpath-test-preprovisionedpv-f494": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229420891s
Jul  5 18:44:36.060: INFO: Pod "pod-subpath-test-preprovisionedpv-f494": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338930002s
Jul  5 18:44:38.170: INFO: Pod "pod-subpath-test-preprovisionedpv-f494": Phase="Pending", Reason="", readiness=false. Elapsed: 6.449255177s
Jul  5 18:44:40.284: INFO: Pod "pod-subpath-test-preprovisionedpv-f494": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563542142s
Jul  5 18:44:42.394: INFO: Pod "pod-subpath-test-preprovisionedpv-f494": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.67324171s
STEP: Saw pod success
Jul  5 18:44:42.394: INFO: Pod "pod-subpath-test-preprovisionedpv-f494" satisfied condition "Succeeded or Failed"
Jul  5 18:44:42.503: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-f494 container test-container-volume-preprovisionedpv-f494: <nil>
STEP: delete the pod
Jul  5 18:44:42.735: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-f494 to disappear
Jul  5 18:44:42.845: INFO: Pod pod-subpath-test-preprovisionedpv-f494 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-f494
Jul  5 18:44:42.845: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-f494" in namespace "provisioning-3902"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
Jul  5 18:44:08.570: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-mgvmv] to have phase Bound
Jul  5 18:44:08.678: INFO: PersistentVolumeClaim pvc-mgvmv found and phase=Bound (108.054898ms)
STEP: Deleting the previously created pod
Jul  5 18:44:15.227: INFO: Deleting pod "pvc-volume-tester-l76cf" in namespace "csi-mock-volumes-2490"
Jul  5 18:44:15.338: INFO: Wait up to 5m0s for pod "pvc-volume-tester-l76cf" to be fully deleted
STEP: Checking CSI driver logs
Jul  5 18:44:21.678: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/97d91c7c-cf86-4c05-bc82-7842a59dac2f/volumes/kubernetes.io~csi/pvc-61e01f53-92f5-42b9-86cd-2d28642e1e2a/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-l76cf
Jul  5 18:44:21.678: INFO: Deleting pod "pvc-volume-tester-l76cf" in namespace "csi-mock-volumes-2490"
STEP: Deleting claim pvc-mgvmv
Jul  5 18:44:22.013: INFO: Waiting up to 2m0s for PersistentVolume pvc-61e01f53-92f5-42b9-86cd-2d28642e1e2a to get deleted
Jul  5 18:44:22.123: INFO: PersistentVolume pvc-61e01f53-92f5-42b9-86cd-2d28642e1e2a was removed
STEP: Deleting storageclass csi-mock-volumes-2490-sc6rkzb
... skipping 95 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1233
    should create services for rc  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":2,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:44.527: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:45.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3979" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":3,"skipped":62,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource","total":-1,"completed":4,"skipped":23,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:44:42.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/configmap_volume.go:110
STEP: Creating configMap with name configmap-test-volume-map-21d4cf28-7867-4e72-8f01-f495d1e669bf
STEP: Creating a pod to test consume configMaps
Jul  5 18:44:43.265: INFO: Waiting up to 5m0s for pod "pod-configmaps-e81580fa-c8fc-4884-881a-1cafde8ef9b9" in namespace "configmap-5969" to be "Succeeded or Failed"
Jul  5 18:44:43.375: INFO: Pod "pod-configmaps-e81580fa-c8fc-4884-881a-1cafde8ef9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 109.715093ms
Jul  5 18:44:45.485: INFO: Pod "pod-configmaps-e81580fa-c8fc-4884-881a-1cafde8ef9b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220002857s
STEP: Saw pod success
Jul  5 18:44:45.485: INFO: Pod "pod-configmaps-e81580fa-c8fc-4884-881a-1cafde8ef9b9" satisfied condition "Succeeded or Failed"
Jul  5 18:44:45.595: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-configmaps-e81580fa-c8fc-4884-881a-1cafde8ef9b9 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 18:44:45.830: INFO: Waiting for pod pod-configmaps-e81580fa-c8fc-4884-881a-1cafde8ef9b9 to disappear
Jul  5 18:44:45.939: INFO: Pod pod-configmaps-e81580fa-c8fc-4884-881a-1cafde8ef9b9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:45.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5969" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":23,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:44:45.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Jul  5 18:44:46.329: INFO: Waiting up to 5m0s for pod "busybox-user-0-3cdc411b-e49e-47f8-b6a4-3881559831ca" in namespace "security-context-test-5047" to be "Succeeded or Failed"
Jul  5 18:44:46.438: INFO: Pod "busybox-user-0-3cdc411b-e49e-47f8-b6a4-3881559831ca": Phase="Pending", Reason="", readiness=false. Elapsed: 109.251428ms
Jul  5 18:44:48.587: INFO: Pod "busybox-user-0-3cdc411b-e49e-47f8-b6a4-3881559831ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.257764357s
Jul  5 18:44:48.587: INFO: Pod "busybox-user-0-3cdc411b-e49e-47f8-b6a4-3881559831ca" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:48.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5047" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":63,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:48.838: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 79 lines ...
• [SLOW TEST:32.566 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:50.345: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 89 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:75
STEP: Creating configMap with name projected-configmap-test-volume-787b5f16-8acd-4f3b-8bba-4f34ff91624e
STEP: Creating a pod to test consume configMaps
Jul  5 18:44:51.175: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e9b9b832-9af7-4818-a198-2483448625b3" in namespace "projected-221" to be "Succeeded or Failed"
Jul  5 18:44:51.286: INFO: Pod "pod-projected-configmaps-e9b9b832-9af7-4818-a198-2483448625b3": Phase="Pending", Reason="", readiness=false. Elapsed: 111.092949ms
Jul  5 18:44:53.401: INFO: Pod "pod-projected-configmaps-e9b9b832-9af7-4818-a198-2483448625b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.225406702s
STEP: Saw pod success
Jul  5 18:44:53.401: INFO: Pod "pod-projected-configmaps-e9b9b832-9af7-4818-a198-2483448625b3" satisfied condition "Succeeded or Failed"
Jul  5 18:44:53.510: INFO: Trying to get logs from node ip-172-20-47-191.eu-central-1.compute.internal pod pod-projected-configmaps-e9b9b832-9af7-4818-a198-2483448625b3 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 18:44:53.744: INFO: Waiting for pod pod-projected-configmaps-e9b9b832-9af7-4818-a198-2483448625b3 to disappear
Jul  5 18:44:53.853: INFO: Pod pod-projected-configmaps-e9b9b832-9af7-4818-a198-2483448625b3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:53.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-221" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:54.107: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: windows-gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 47 lines ...
Jul  5 18:44:45.655: INFO: PersistentVolumeClaim pvc-qgg9k found but phase is Pending instead of Bound.
Jul  5 18:44:47.766: INFO: PersistentVolumeClaim pvc-qgg9k found and phase=Bound (14.908991898s)
Jul  5 18:44:47.766: INFO: Waiting up to 3m0s for PersistentVolume local-z67vh to have phase Bound
Jul  5 18:44:47.875: INFO: PersistentVolume local-z67vh found and phase=Bound (108.991309ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-8l94
STEP: Creating a pod to test subpath
Jul  5 18:44:48.203: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-8l94" in namespace "provisioning-6819" to be "Succeeded or Failed"
Jul  5 18:44:48.314: INFO: Pod "pod-subpath-test-preprovisionedpv-8l94": Phase="Pending", Reason="", readiness=false. Elapsed: 110.439235ms
Jul  5 18:44:50.424: INFO: Pod "pod-subpath-test-preprovisionedpv-8l94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220671862s
Jul  5 18:44:52.535: INFO: Pod "pod-subpath-test-preprovisionedpv-8l94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331306189s
STEP: Saw pod success
Jul  5 18:44:52.535: INFO: Pod "pod-subpath-test-preprovisionedpv-8l94" satisfied condition "Succeeded or Failed"
Jul  5 18:44:52.646: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-8l94 container test-container-subpath-preprovisionedpv-8l94: <nil>
STEP: delete the pod
Jul  5 18:44:52.873: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-8l94 to disappear
Jul  5 18:44:52.983: INFO: Pod pod-subpath-test-preprovisionedpv-8l94 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-8l94
Jul  5 18:44:52.983: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-8l94" in namespace "provisioning-6819"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":52,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:54.588: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Jul  5 18:44:45.882: INFO: PersistentVolumeClaim pvc-snt5b found but phase is Pending instead of Bound.
Jul  5 18:44:47.991: INFO: PersistentVolumeClaim pvc-snt5b found and phase=Bound (14.942828436s)
Jul  5 18:44:47.991: INFO: Waiting up to 3m0s for PersistentVolume local-fh2d8 to have phase Bound
Jul  5 18:44:48.100: INFO: PersistentVolume local-fh2d8 found and phase=Bound (108.963353ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-pnkq
STEP: Creating a pod to test subpath
Jul  5 18:44:48.431: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-pnkq" in namespace "provisioning-9022" to be "Succeeded or Failed"
Jul  5 18:44:48.542: INFO: Pod "pod-subpath-test-preprovisionedpv-pnkq": Phase="Pending", Reason="", readiness=false. Elapsed: 110.394747ms
Jul  5 18:44:50.662: INFO: Pod "pod-subpath-test-preprovisionedpv-pnkq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230473728s
Jul  5 18:44:52.772: INFO: Pod "pod-subpath-test-preprovisionedpv-pnkq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.341081181s
STEP: Saw pod success
Jul  5 18:44:52.772: INFO: Pod "pod-subpath-test-preprovisionedpv-pnkq" satisfied condition "Succeeded or Failed"
Jul  5 18:44:52.881: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-pnkq container test-container-subpath-preprovisionedpv-pnkq: <nil>
STEP: delete the pod
Jul  5 18:44:53.110: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-pnkq to disappear
Jul  5 18:44:53.219: INFO: Pod pod-subpath-test-preprovisionedpv-pnkq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-pnkq
Jul  5 18:44:53.219: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-pnkq" in namespace "provisioning-9022"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:54.772: INFO: Only supported for providers [vsphere] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 31 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:55.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3032" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":-1,"completed":4,"skipped":39,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Jul  5 18:44:45.477: INFO: PersistentVolumeClaim pvc-grkzt found but phase is Pending instead of Bound.
Jul  5 18:44:47.586: INFO: PersistentVolumeClaim pvc-grkzt found and phase=Bound (10.661803479s)
Jul  5 18:44:47.586: INFO: Waiting up to 3m0s for PersistentVolume local-q6t8m to have phase Bound
Jul  5 18:44:47.695: INFO: PersistentVolume local-q6t8m found and phase=Bound (108.706976ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rm7m
STEP: Creating a pod to test subpath
Jul  5 18:44:48.023: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rm7m" in namespace "provisioning-9462" to be "Succeeded or Failed"
Jul  5 18:44:48.132: INFO: Pod "pod-subpath-test-preprovisionedpv-rm7m": Phase="Pending", Reason="", readiness=false. Elapsed: 108.858937ms
Jul  5 18:44:50.241: INFO: Pod "pod-subpath-test-preprovisionedpv-rm7m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218750559s
Jul  5 18:44:52.351: INFO: Pod "pod-subpath-test-preprovisionedpv-rm7m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328606764s
Jul  5 18:44:54.462: INFO: Pod "pod-subpath-test-preprovisionedpv-rm7m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.439039431s
STEP: Saw pod success
Jul  5 18:44:54.462: INFO: Pod "pod-subpath-test-preprovisionedpv-rm7m" satisfied condition "Succeeded or Failed"
Jul  5 18:44:54.571: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-rm7m container test-container-volume-preprovisionedpv-rm7m: <nil>
STEP: delete the pod
Jul  5 18:44:54.809: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rm7m to disappear
Jul  5 18:44:54.921: INFO: Pod pod-subpath-test-preprovisionedpv-rm7m no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rm7m
Jul  5 18:44:54.921: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rm7m" in namespace "provisioning-9462"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":4,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:56.506: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 147 lines ...
• [SLOW TEST:63.407 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:44:59.394: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 36 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:44:59.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1016" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size","total":-1,"completed":5,"skipped":42,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 131 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 104 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  storage capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1023
    unlimited
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1081
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume storage capacity unlimited","total":-1,"completed":3,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":3,"skipped":12,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:45:00.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Jul  5 18:45:00.953: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-a606d9a6-2e14-4922-9e12-d3a491541767" in namespace "security-context-test-6429" to be "Succeeded or Failed"
Jul  5 18:45:01.063: INFO: Pod "alpine-nnp-nil-a606d9a6-2e14-4922-9e12-d3a491541767": Phase="Pending", Reason="", readiness=false. Elapsed: 109.468581ms
Jul  5 18:45:03.173: INFO: Pod "alpine-nnp-nil-a606d9a6-2e14-4922-9e12-d3a491541767": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220041997s
Jul  5 18:45:05.287: INFO: Pod "alpine-nnp-nil-a606d9a6-2e14-4922-9e12-d3a491541767": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.333504436s
Jul  5 18:45:05.287: INFO: Pod "alpine-nnp-nil-a606d9a6-2e14-4922-9e12-d3a491541767" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:45:05.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6429" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:05.643: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:45:04.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:45:06.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-488" for this suite.

•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:07.095: INFO: Only supported for providers [openstack] (not aws)
... skipping 38 lines ...
STEP: Destroying namespace "services-9785" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":4,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:08.592: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 67 lines ...
STEP: Registering the webhook via the AdmissionRegistration API
Jul  5 18:44:19.984: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:44:30.306: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:44:40.605: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:44:50.911: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:45:01.136: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:45:01.137: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 469 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 18:45:01.137: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:909
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":0,"skipped":6,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:11.386: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 61 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":23,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 50 lines ...
Jul  5 18:44:30.231: INFO: PersistentVolumeClaim pvc-2rtx7 found and phase=Bound (108.68568ms)
STEP: Deleting the previously created pod
Jul  5 18:44:43.791: INFO: Deleting pod "pvc-volume-tester-428kt" in namespace "csi-mock-volumes-8433"
Jul  5 18:44:43.902: INFO: Wait up to 5m0s for pod "pvc-volume-tester-428kt" to be fully deleted
STEP: Checking CSI driver logs
Jul  5 18:44:52.237: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.tokens: {"":{"token":"eyJhbGciOiJSUzI1NiIsImtpZCI6ImdILVRuemo1R3dHZy1SdDByWkRJQWJ5V2xMTnBSWEd2ajZWbmE4dnhKRDQifQ.eyJhdWQiOlsia3ViZXJuZXRlcy5zdmMuZGVmYXVsdCJdLCJleHAiOjE2MjU1MTEyNzYsImlhdCI6MTYyNTUxMDY3NiwiaXNzIjoiaHR0cHM6Ly9rOHMta29wcy1wcm93LnMzLnVzLXdlc3QtMS5hbWF6b25hd3MuY29tL2tvcHMtZ3JpZC1zY2VuYXJpby1hd3MtY2xvdWQtY29udHJvbGxlci1tYW5hZ2VyLWlyc2EvZGlzY292ZXJ5Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjc2ktbW9jay12b2x1bWVzLTg0MzMiLCJwb2QiOnsibmFtZSI6InB2Yy12b2x1bWUtdGVzdGVyLTQyOGt0IiwidWlkIjoiNDY0ZmZjYTYtZDJlMS00Mzg0LTg4ODAtZWIyYjNkYjhkNzRlIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkZWZhdWx0IiwidWlkIjoiMWVkYmMzM2QtYTdiZS00NjYwLWFmODQtMzA2NWE1Zjg1YjZjIn19LCJuYmYiOjE2MjU1MTA2NzYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjc2ktbW9jay12b2x1bWVzLTg0MzM6ZGVmYXVsdCJ9.rmWqXSzt0bIm_-UWIdOfDyU6Q8zp1lqNeG0NMsrYJVPAhHdIZ9iMFjJ5ujwBGRY3OGlJqocm9MoHPsyr-Ifr73uD9KsTrOKIh11T4rT4GGooUadUpFtMKUMeG-68hfvKZxjmhi8W1IutW7zZN7xqM5SQFTsK8y5xz2_MBptc1p4reuJQAro5FyE_RM-asf1iVH3Uj1MOBN0tDWzEnu-XvJyA29OKYHSvyCEW5RgCtMe7OpEqobwQpK8luDLd-TLTpSQPlxj172dtt1YXntFGN0HxmoSMIOw6Xs1yCfsmQ4EG6dTWqJBfCQ2140-UoZvPP_uWDiqpBe89X3JbRvws3Q","expirationTimestamp":"2021-07-05T18:54:36Z"}}
Jul  5 18:44:52.237: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/464ffca6-d2e1-4384-8880-eb2b3db8d74e/volumes/kubernetes.io~csi/pvc-6588ffd7-e8c8-474e-96ec-bb1c706ba5bb/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-428kt
Jul  5 18:44:52.237: INFO: Deleting pod "pvc-volume-tester-428kt" in namespace "csi-mock-volumes-8433"
STEP: Deleting claim pvc-2rtx7
Jul  5 18:44:52.568: INFO: Waiting up to 2m0s for PersistentVolume pvc-6588ffd7-e8c8-474e-96ec-bb1c706ba5bb to get deleted
Jul  5 18:44:52.677: INFO: PersistentVolume pvc-6588ffd7-e8c8-474e-96ec-bb1c706ba5bb was removed
STEP: Deleting storageclass csi-mock-volumes-8433-scsnclx
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIServiceAccountToken
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1497
    token should be plumbed down when csiServiceAccountTokenEnabled=true
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1525
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true","total":-1,"completed":1,"skipped":5,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
STEP: Creating a validating webhook configuration
Jul  5 18:44:24.846: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:44:35.171: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:44:45.467: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:44:55.782: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:45:06.005: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:45:06.005: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 463 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 18:45:06.005: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:432
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":0,"skipped":7,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:91
STEP: Creating a pod to test downward API volume plugin
Jul  5 18:45:15.693: INFO: Waiting up to 5m0s for pod "metadata-volume-dc6b995f-5df6-4fa8-bf30-4a56d068520c" in namespace "downward-api-5903" to be "Succeeded or Failed"
Jul  5 18:45:15.803: INFO: Pod "metadata-volume-dc6b995f-5df6-4fa8-bf30-4a56d068520c": Phase="Pending", Reason="", readiness=false. Elapsed: 109.445671ms
Jul  5 18:45:17.915: INFO: Pod "metadata-volume-dc6b995f-5df6-4fa8-bf30-4a56d068520c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222415547s
STEP: Saw pod success
Jul  5 18:45:17.916: INFO: Pod "metadata-volume-dc6b995f-5df6-4fa8-bf30-4a56d068520c" satisfied condition "Succeeded or Failed"
Jul  5 18:45:18.025: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod metadata-volume-dc6b995f-5df6-4fa8-bf30-4a56d068520c container client-container: <nil>
STEP: delete the pod
Jul  5 18:45:18.252: INFO: Waiting for pod metadata-volume-dc6b995f-5df6-4fa8-bf30-4a56d068520c to disappear
Jul  5 18:45:18.361: INFO: Pod metadata-volume-dc6b995f-5df6-4fa8-bf30-4a56d068520c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 9 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Jul  5 18:45:16.249: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-fabbb86d-aad9-49c2-bc42-8410817e34ef" in namespace "security-context-test-7420" to be "Succeeded or Failed"
Jul  5 18:45:16.359: INFO: Pod "busybox-readonly-true-fabbb86d-aad9-49c2-bc42-8410817e34ef": Phase="Pending", Reason="", readiness=false. Elapsed: 109.429633ms
Jul  5 18:45:18.471: INFO: Pod "busybox-readonly-true-fabbb86d-aad9-49c2-bc42-8410817e34ef": Phase="Failed", Reason="", readiness=false. Elapsed: 2.221689279s
Jul  5 18:45:18.471: INFO: Pod "busybox-readonly-true-fabbb86d-aad9-49c2-bc42-8410817e34ef" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:45:18.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7420" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":10,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:18.716: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 74 lines ...
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul  5 18:45:11.993: INFO: start=2021-07-05 18:45:06.758576651 +0000 UTC m=+94.492975522, now=2021-07-05 18:45:11.993155826 +0000 UTC m=+99.727554711, kubelet pod: {"metadata":{"name":"pod-submit-remove-7bccd1e5-fb8f-4182-ab4d-fc6743e8d887","namespace":"pods-3902","uid":"7d435b91-2ea1-47dd-a040-f9f63d482c00","resourceVersion":"4471","creationTimestamp":"2021-07-05T18:45:04Z","deletionTimestamp":"2021-07-05T18:45:36Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"980537401"},"annotations":{"kubernetes.io/config.seen":"2021-07-05T18:45:04.156698454Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-07-05T18:45:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-42fzt","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-42fzt","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-47-191.eu-central-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-05T18:45:04Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-07-05T18:45:08Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-07-05T18:45:08Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-05T18:45:04Z"}],"hostIP":"172.20.47.191","podIP":"100.96.1.20","podIPs":[{"ip":"100.96.1.20"}],"startTime":"2021-07-05T18:45:04Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-07-05T18:45:04Z","finishedAt":"2021-07-05T18:45:07Z","containerID":"containerd://ed639760320567891a98d6c94d222a378429d297c0574179e4caa5854f699a65"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://ed639760320567891a98d6c94d222a378429d297c0574179e4caa5854f699a65","started":false}],"qosClass":"BestEffort"}}
Jul  5 18:45:16.875: INFO: start=2021-07-05 18:45:06.758576651 +0000 UTC m=+94.492975522, now=2021-07-05 18:45:16.87505912 +0000 UTC m=+104.609458024, kubelet pod: {"metadata":{"name":"pod-submit-remove-7bccd1e5-fb8f-4182-ab4d-fc6743e8d887","namespace":"pods-3902","uid":"7d435b91-2ea1-47dd-a040-f9f63d482c00","resourceVersion":"4471","creationTimestamp":"2021-07-05T18:45:04Z","deletionTimestamp":"2021-07-05T18:45:36Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"980537401"},"annotations":{"kubernetes.io/config.seen":"2021-07-05T18:45:04.156698454Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-07-05T18:45:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-42fzt","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-42fzt","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ip-172-20-47-191.eu-central-1.compute.internal","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-05T18:45:04Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-07-05T18:45:08Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-07-05T18:45:08Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-07-05T18:45:04Z"}],"hostIP":"172.20.47.191","podIP":"100.96.1.20","podIPs":[{"ip":"100.96.1.20"}],"startTime":"2021-07-05T18:45:04Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-07-05T18:45:04Z","finishedAt":"2021-07-05T18:45:07Z","containerID":"containerd://ed639760320567891a98d6c94d222a378429d297c0574179e4caa5854f699a65"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"containerd://ed639760320567891a98d6c94d222a378429d297c0574179e4caa5854f699a65","started":false}],"qosClass":"BestEffort"}}
Jul  5 18:45:21.878: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:45:21.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3902" for this suite.

... skipping 3 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:54
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:65
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":4,"skipped":9,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:22.248: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 80 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not mount / map unused volumes in a pod [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:351
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":6,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:22.293: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 65 lines ...
Jul  5 18:45:14.163: INFO: PersistentVolumeClaim pvc-zd95f found but phase is Pending instead of Bound.
Jul  5 18:45:16.280: INFO: PersistentVolumeClaim pvc-zd95f found and phase=Bound (14.895491688s)
Jul  5 18:45:16.280: INFO: Waiting up to 3m0s for PersistentVolume local-kzjzz to have phase Bound
Jul  5 18:45:16.389: INFO: PersistentVolume local-kzjzz found and phase=Bound (108.735049ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-6qdw
STEP: Creating a pod to test exec-volume-test
Jul  5 18:45:16.716: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-6qdw" in namespace "volume-6616" to be "Succeeded or Failed"
Jul  5 18:45:16.825: INFO: Pod "exec-volume-test-preprovisionedpv-6qdw": Phase="Pending", Reason="", readiness=false. Elapsed: 109.41978ms
Jul  5 18:45:18.935: INFO: Pod "exec-volume-test-preprovisionedpv-6qdw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219590921s
STEP: Saw pod success
Jul  5 18:45:18.936: INFO: Pod "exec-volume-test-preprovisionedpv-6qdw" satisfied condition "Succeeded or Failed"
Jul  5 18:45:19.045: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-6qdw container exec-container-preprovisionedpv-6qdw: <nil>
STEP: delete the pod
Jul  5 18:45:19.282: INFO: Waiting for pod exec-volume-test-preprovisionedpv-6qdw to disappear
Jul  5 18:45:19.390: INFO: Pod exec-volume-test-preprovisionedpv-6qdw no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-6qdw
Jul  5 18:45:19.390: INFO: Deleting pod "exec-volume-test-preprovisionedpv-6qdw" in namespace "volume-6616"
... skipping 24 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":42,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:22.357: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 127 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:44:44.522: INFO: >>> kubeConfig: /root/.kube/config
... skipping 47 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should be able to unmount after the subpath directory is deleted [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:444
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:45:18.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Jul  5 18:45:19.264: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-4730" to be "Succeeded or Failed"
Jul  5 18:45:19.373: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 109.00876ms
Jul  5 18:45:21.482: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218454527s
Jul  5 18:45:23.593: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329671059s
Jul  5 18:45:23.594: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:45:23.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4730" for this suite.


... skipping 2 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:23.955: INFO: Only supported for providers [gce gke] (not aws)
... skipping 64 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:45:26.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2537" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods","total":-1,"completed":5,"skipped":14,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 20 lines ...
Jul  5 18:45:14.762: INFO: PersistentVolumeClaim pvc-sxw27 found but phase is Pending instead of Bound.
Jul  5 18:45:16.872: INFO: PersistentVolumeClaim pvc-sxw27 found and phase=Bound (6.438928023s)
Jul  5 18:45:16.872: INFO: Waiting up to 3m0s for PersistentVolume local-twntx to have phase Bound
Jul  5 18:45:16.981: INFO: PersistentVolume local-twntx found and phase=Bound (109.145149ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-fnd7
STEP: Creating a pod to test subpath
Jul  5 18:45:17.312: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-fnd7" in namespace "provisioning-1388" to be "Succeeded or Failed"
Jul  5 18:45:17.433: INFO: Pod "pod-subpath-test-preprovisionedpv-fnd7": Phase="Pending", Reason="", readiness=false. Elapsed: 121.801724ms
Jul  5 18:45:19.545: INFO: Pod "pod-subpath-test-preprovisionedpv-fnd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233048155s
Jul  5 18:45:21.656: INFO: Pod "pod-subpath-test-preprovisionedpv-fnd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.344333367s
Jul  5 18:45:23.767: INFO: Pod "pod-subpath-test-preprovisionedpv-fnd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.455141018s
STEP: Saw pod success
Jul  5 18:45:23.767: INFO: Pod "pod-subpath-test-preprovisionedpv-fnd7" satisfied condition "Succeeded or Failed"
Jul  5 18:45:23.877: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-fnd7 container test-container-subpath-preprovisionedpv-fnd7: <nil>
STEP: delete the pod
Jul  5 18:45:24.120: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-fnd7 to disappear
Jul  5 18:45:24.229: INFO: Pod pod-subpath-test-preprovisionedpv-fnd7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-fnd7
Jul  5 18:45:24.229: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-fnd7" in namespace "provisioning-1388"
... skipping 35 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 18:45:24.664: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89532c11-ed5b-4977-8c4d-a02c17eb45df" in namespace "projected-6884" to be "Succeeded or Failed"
Jul  5 18:45:24.773: INFO: Pod "downwardapi-volume-89532c11-ed5b-4977-8c4d-a02c17eb45df": Phase="Pending", Reason="", readiness=false. Elapsed: 108.848392ms
Jul  5 18:45:26.883: INFO: Pod "downwardapi-volume-89532c11-ed5b-4977-8c4d-a02c17eb45df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218656755s
STEP: Saw pod success
Jul  5 18:45:26.883: INFO: Pod "downwardapi-volume-89532c11-ed5b-4977-8c4d-a02c17eb45df" satisfied condition "Succeeded or Failed"
Jul  5 18:45:26.998: INFO: Trying to get logs from node ip-172-20-47-191.eu-central-1.compute.internal pod downwardapi-volume-89532c11-ed5b-4977-8c4d-a02c17eb45df container client-container: <nil>
STEP: delete the pod
Jul  5 18:45:27.223: INFO: Waiting for pod downwardapi-volume-89532c11-ed5b-4977-8c4d-a02c17eb45df to disappear
Jul  5 18:45:27.333: INFO: Pod downwardapi-volume-89532c11-ed5b-4977-8c4d-a02c17eb45df no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:45:27.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6884" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":5,"skipped":16,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:28.812: INFO: Only supported for providers [openstack] (not aws)
... skipping 89 lines ...
Jul  5 18:45:00.219: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-5236lhmr7
STEP: creating a claim
Jul  5 18:45:00.329: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Expanding non-expandable pvc
Jul  5 18:45:00.552: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  5 18:45:00.777: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5236lhmr7",
  	... // 2 identical fields
  }

Jul  5 18:45:02.997: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5236lhmr7",
  	... // 2 identical fields
  }

Jul  5 18:45:05.004: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5236lhmr7",
  	... // 2 identical fields
  }

Jul  5 18:45:06.997: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5236lhmr7",
  	... // 2 identical fields
  }

Jul  5 18:45:09.000: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5236lhmr7",
  	... // 2 identical fields
  }

Jul  5 18:45:10.999: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5236lhmr7",
  	... // 2 identical fields
  }

Jul  5 18:45:12.998: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5236lhmr7",
  	... // 2 identical fields
  }

Jul  5 18:45:15.000: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 5 lines ...
  	},
  	VolumeName:       "",
  	StorageClassName: &"volume-expand-5236lhmr7",
  	... // 2 identical fields
  }

... skipping 112 lines (8 identical retry attempts between 18:45:16 and 18:45:31) ...
Jul  5 18:45:31.220: INFO: Error updating pvc awsf8bl2: PersistentVolumeClaim "awsf8bl2" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
  	AccessModes: {"ReadWriteOnce"},
  	Selector:    nil,
  	Resources: core.ResourceRequirements{
  		Limits: nil,
- 		Requests: core.ResourceList{
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":6,"skipped":47,"failed":0}

SSSSSS
------------------------------
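The repeated "spec is immutable after creation except resources.requests for bound claims" rejections above are the expected outcome of this test: it grows `resources.requests.storage` on a claim whose StorageClass does not allow expansion, and the apiserver refuses every attempt until the test gives up. A minimal sketch of the generic update rule named in that error (this is NOT the real Kubernetes validation code, and it omits the expansion gate that made even the requests change fail here):

```python
# Illustrative model of the PVC update rule: for a bound claim, every spec
# field except resources.requests is immutable. Hypothetical helper, not
# the actual apiserver validation.

def pvc_update_allowed(old_spec: dict, new_spec: dict, bound: bool) -> bool:
    """Return True if this simplified rule would accept the spec change."""

    def without_requests(spec: dict) -> dict:
        # Drop resources.requests, the only field a bound claim may change.
        stripped = {k: v for k, v in spec.items() if k != "resources"}
        resources = dict(spec.get("resources", {}))
        resources.pop("requests", None)
        stripped["resources"] = resources
        return stripped

    if old_spec == new_spec:
        return True
    return bound and without_requests(old_spec) == without_requests(new_spec)
```

Under this rule a bound claim may grow its storage request, but changing any other field (or updating an unbound claim at all) is rejected, which matches the diff printed on each retry.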
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI attach test using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:317
    should preserve attachment policy when no CSIDriver present
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:339
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present","total":-1,"completed":2,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:35.212: INFO: Only supported for providers [gce gke] (not aws)
... skipping 137 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support multiple inline ephemeral volumes
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:221
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":2,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:35.575: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 133 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:391
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":2,"skipped":21,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:37.509: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:37
[It] should support r/w [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/host_path.go:65
STEP: Creating a pod to test hostPath r/w
Jul  5 18:45:35.895: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1862" to be "Succeeded or Failed"
Jul  5 18:45:36.005: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 109.726541ms
Jul  5 18:45:38.118: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222913395s
STEP: Saw pod success
Jul  5 18:45:38.118: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul  5 18:45:38.228: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-host-path-test container test-container-2: <nil>
STEP: delete the pod
Jul  5 18:45:38.453: INFO: Waiting for pod pod-host-path-test to disappear
Jul  5 18:45:38.562: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      Verify if offline PVC expansion works
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:174
------------------------------
{"msg":"PASSED [sig-storage] HostPath should support r/w [NodeConformance]","total":-1,"completed":3,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:39.639: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 98 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":5,"skipped":34,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:45:28.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
• [SLOW TEST:16.407 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:46.285: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 56 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:452
    that expects NO client request
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:462
      should support a client that connects, sends DATA, and disconnects
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/portforward.go:463
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":21,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:48.038: INFO: Only supported for providers [gce gke] (not aws)
... skipping 197 lines ...
Jul  5 18:44:31.964: INFO: PersistentVolumeClaim csi-hostpath2jxj6 found but phase is Pending instead of Bound.
Jul  5 18:44:34.074: INFO: PersistentVolumeClaim csi-hostpath2jxj6 found but phase is Pending instead of Bound.
Jul  5 18:44:36.182: INFO: PersistentVolumeClaim csi-hostpath2jxj6 found but phase is Pending instead of Bound.
Jul  5 18:44:38.293: INFO: PersistentVolumeClaim csi-hostpath2jxj6 found and phase=Bound (6.439372917s)
STEP: Expanding non-expandable pvc
Jul  5 18:44:38.509: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  5 18:44:38.763: INFO: Error updating pvc csi-hostpath2jxj6: persistentvolumeclaims "csi-hostpath2jxj6" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:44:40.980: INFO: Error updating pvc csi-hostpath2jxj6: persistentvolumeclaims "csi-hostpath2jxj6" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
... skipping 14 lines (identical retry attempts every ~2s) ...
Jul  5 18:45:09.199: INFO: Error updating pvc csi-hostpath2jxj6: persistentvolumeclaims "csi-hostpath2jxj6" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul  5 18:45:09.200: INFO: Deleting PersistentVolumeClaim "csi-hostpath2jxj6"
Jul  5 18:45:09.311: INFO: Waiting up to 5m0s for PersistentVolume pvc-28d7ea61-fe4f-41b5-9808-4a8c5c8514b7 to get deleted
Jul  5 18:45:09.419: INFO: PersistentVolume pvc-28d7ea61-fe4f-41b5-9808-4a8c5c8514b7 was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-1140
... skipping 54 lines ...
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
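The retries above all fail with "only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize", because the test's csi-hostpath StorageClass leaves `allowVolumeExpansion` unset (it is a real StorageClass field that defaults to false). A hedged sketch of that gate, purely illustrative and not the actual admission code:

```python
# Hypothetical model of the resize gate named in the error message:
# the claim must be dynamically provisioned AND its StorageClass must set
# allowVolumeExpansion: true.

def resize_allowed(dynamically_provisioned: bool, storage_class: dict) -> bool:
    return dynamically_provisioned and bool(
        storage_class.get("allowVolumeExpansion", False))
```

With expansion unset on the class, every resize attempt is rejected, which is exactly the pass condition of "should not allow expansion of pvcs without AllowVolumeExpansion property".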
SS
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":2,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:48.455: INFO: Driver hostPath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 178 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:45:52.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2297" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:52.500: INFO: Only supported for providers [azure] (not aws)
... skipping 62 lines ...
Jul  5 18:45:45.800: INFO: PersistentVolumeClaim pvc-t2ldn found but phase is Pending instead of Bound.
Jul  5 18:45:47.910: INFO: PersistentVolumeClaim pvc-t2ldn found and phase=Bound (2.219259283s)
Jul  5 18:45:47.910: INFO: Waiting up to 3m0s for PersistentVolume local-26pbp to have phase Bound
Jul  5 18:45:48.019: INFO: PersistentVolume local-26pbp found and phase=Bound (109.313898ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-dbzm
STEP: Creating a pod to test exec-volume-test
Jul  5 18:45:48.353: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-dbzm" in namespace "volume-9997" to be "Succeeded or Failed"
Jul  5 18:45:48.462: INFO: Pod "exec-volume-test-preprovisionedpv-dbzm": Phase="Pending", Reason="", readiness=false. Elapsed: 109.193628ms
Jul  5 18:45:50.573: INFO: Pod "exec-volume-test-preprovisionedpv-dbzm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220121032s
STEP: Saw pod success
Jul  5 18:45:50.573: INFO: Pod "exec-volume-test-preprovisionedpv-dbzm" satisfied condition "Succeeded or Failed"
Jul  5 18:45:50.687: INFO: Trying to get logs from node ip-172-20-47-191.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-dbzm container exec-container-preprovisionedpv-dbzm: <nil>
STEP: delete the pod
Jul  5 18:45:50.919: INFO: Waiting for pod exec-volume-test-preprovisionedpv-dbzm to disappear
Jul  5 18:45:51.030: INFO: Pod exec-volume-test-preprovisionedpv-dbzm no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-dbzm
Jul  5 18:45:51.030: INFO: Deleting pod "exec-volume-test-preprovisionedpv-dbzm" in namespace "volume-9997"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":31,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 21 lines ...
Jul  5 18:45:44.552: INFO: PersistentVolumeClaim pvc-689p2 found but phase is Pending instead of Bound.
Jul  5 18:45:46.665: INFO: PersistentVolumeClaim pvc-689p2 found and phase=Bound (4.332843537s)
Jul  5 18:45:46.665: INFO: Waiting up to 3m0s for PersistentVolume local-krpfm to have phase Bound
Jul  5 18:45:46.774: INFO: PersistentVolume local-krpfm found and phase=Bound (108.861549ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-pbq2
STEP: Creating a pod to test exec-volume-test
Jul  5 18:45:47.103: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-pbq2" in namespace "volume-9124" to be "Succeeded or Failed"
Jul  5 18:45:47.212: INFO: Pod "exec-volume-test-preprovisionedpv-pbq2": Phase="Pending", Reason="", readiness=false. Elapsed: 109.194722ms
Jul  5 18:45:49.322: INFO: Pod "exec-volume-test-preprovisionedpv-pbq2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219150256s
Jul  5 18:45:51.432: INFO: Pod "exec-volume-test-preprovisionedpv-pbq2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.328983283s
STEP: Saw pod success
Jul  5 18:45:51.432: INFO: Pod "exec-volume-test-preprovisionedpv-pbq2" satisfied condition "Succeeded or Failed"
Jul  5 18:45:51.542: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod exec-volume-test-preprovisionedpv-pbq2 container exec-container-preprovisionedpv-pbq2: <nil>
STEP: delete the pod
Jul  5 18:45:51.779: INFO: Waiting for pod exec-volume-test-preprovisionedpv-pbq2 to disappear
Jul  5 18:45:51.888: INFO: Pod exec-volume-test-preprovisionedpv-pbq2 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-pbq2
Jul  5 18:45:51.889: INFO: Deleting pod "exec-volume-test-preprovisionedpv-pbq2" in namespace "volume-9124"
... skipping 24 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":11,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSIStorageCapacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1257
    CSIStorageCapacity used, have capacity
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1300
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity","total":-1,"completed":5,"skipped":74,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:45:58.771: INFO: Only supported for providers [vsphere] (not aws)
... skipping 83 lines ...
• [SLOW TEST:12.548 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a persistent volume claim
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:481
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim","total":-1,"completed":6,"skipped":22,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-auth] Metadata Concealment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
• [SLOW TEST:7.645 seconds]
[sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":-1,"completed":4,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:46:03.807: INFO: Only supported for providers [gce gke] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Jul  5 18:45:58.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Jul  5 18:45:59.557: INFO: Waiting up to 5m0s for pod "security-context-ea89f11d-6ef8-4e2c-b7fc-708ce51c7546" in namespace "security-context-9362" to be "Succeeded or Failed"
Jul  5 18:45:59.666: INFO: Pod "security-context-ea89f11d-6ef8-4e2c-b7fc-708ce51c7546": Phase="Pending", Reason="", readiness=false. Elapsed: 109.3784ms
Jul  5 18:46:01.778: INFO: Pod "security-context-ea89f11d-6ef8-4e2c-b7fc-708ce51c7546": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221168809s
Jul  5 18:46:03.888: INFO: Pod "security-context-ea89f11d-6ef8-4e2c-b7fc-708ce51c7546": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331539459s
STEP: Saw pod success
Jul  5 18:46:03.888: INFO: Pod "security-context-ea89f11d-6ef8-4e2c-b7fc-708ce51c7546" satisfied condition "Succeeded or Failed"
Jul  5 18:46:03.997: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod security-context-ea89f11d-6ef8-4e2c-b7fc-708ce51c7546 container test-container: <nil>
STEP: delete the pod
Jul  5 18:46:04.228: INFO: Waiting for pod security-context-ea89f11d-6ef8-4e2c-b7fc-708ce51c7546 to disappear
Jul  5 18:46:04.374: INFO: Pod security-context-ea89f11d-6ef8-4e2c-b7fc-708ce51c7546 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 27 lines ...
      Driver csi-hostpath doesn't support ext3 -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:121
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":7,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":6,"skipped":87,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:46:06.028: INFO: Only supported for providers [gce gke] (not aws)
... skipping 459 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support exec through kubectl proxy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:473
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":5,"skipped":43,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:46:21.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 10 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:46:24.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8874" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":6,"skipped":43,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support existing directory
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
Jul  5 18:46:19.715: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  5 18:46:19.825: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-klvp
STEP: Creating a pod to test subpath
Jul  5 18:46:19.939: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-klvp" in namespace "provisioning-5561" to be "Succeeded or Failed"
Jul  5 18:46:20.049: INFO: Pod "pod-subpath-test-inlinevolume-klvp": Phase="Pending", Reason="", readiness=false. Elapsed: 109.895655ms
Jul  5 18:46:22.159: INFO: Pod "pod-subpath-test-inlinevolume-klvp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220383545s
Jul  5 18:46:24.271: INFO: Pod "pod-subpath-test-inlinevolume-klvp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.332295216s
STEP: Saw pod success
Jul  5 18:46:24.271: INFO: Pod "pod-subpath-test-inlinevolume-klvp" satisfied condition "Succeeded or Failed"
Jul  5 18:46:24.381: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-klvp container test-container-volume-inlinevolume-klvp: <nil>
STEP: delete the pod
Jul  5 18:46:24.607: INFO: Waiting for pod pod-subpath-test-inlinevolume-klvp to disappear
Jul  5 18:46:24.716: INFO: Pod pod-subpath-test-inlinevolume-klvp no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-klvp
Jul  5 18:46:24.716: INFO: Deleting pod "pod-subpath-test-inlinevolume-klvp" in namespace "provisioning-5561"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":5,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:46:25.168: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 181 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:46:28.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7668" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":59,"failed":0}

SS
------------------------------
[BeforeEach] [sig-windows] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/windows/framework.go:28
Jul  5 18:46:28.549: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 35 lines ...
STEP: Registering the crd webhook via the AdmissionRegistration API
Jul  5 18:45:40.131: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:45:50.462: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:46:00.755: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:46:11.054: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:46:21.283: INFO: Waiting for webhook configuration to be ready...
Jul  5 18:46:21.284: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002b8240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 456 lines ...
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 18:46:21.284: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002b8240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:2059
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":6,"skipped":56,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]"]}
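The failure above is a poll-until-deadline timeout: the test re-checked "Waiting for webhook configuration to be ready..." roughly every 10 seconds and eventually surfaced the generic "timed out waiting for the condition" error. A minimal sketch of that pattern, with a hypothetical `condition` callable standing in for the real readiness check (this is not the actual client-go `wait.Poll` implementation):

```python
import time

class ConditionTimeout(Exception):
    """Raised when the condition never became true before the deadline."""

def poll_until(condition, interval=10, timeout=100):
    """Call condition() every `interval` seconds until it returns True.

    Mirrors the log above: on deadline, raise the generic
    'timed out waiting for the condition' error.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return
        time.sleep(interval)
    raise ConditionTimeout("timed out waiting for the condition")

# A condition that becomes true on the third check returns normally.
checks = iter([False, False, True])
poll_until(lambda: next(checks), interval=0, timeout=5)
```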
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:46:30.453: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 43 lines ...
Jul  5 18:46:28.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  5 18:46:29.252: INFO: Waiting up to 5m0s for pod "pod-ddaaac07-d821-40f3-917f-2b0f5d300acd" in namespace "emptydir-3954" to be "Succeeded or Failed"
Jul  5 18:46:29.362: INFO: Pod "pod-ddaaac07-d821-40f3-917f-2b0f5d300acd": Phase="Pending", Reason="", readiness=false. Elapsed: 110.318797ms
Jul  5 18:46:31.473: INFO: Pod "pod-ddaaac07-d821-40f3-917f-2b0f5d300acd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.221010888s
STEP: Saw pod success
Jul  5 18:46:31.473: INFO: Pod "pod-ddaaac07-d821-40f3-917f-2b0f5d300acd" satisfied condition "Succeeded or Failed"
Jul  5 18:46:31.583: INFO: Trying to get logs from node ip-172-20-47-191.eu-central-1.compute.internal pod pod-ddaaac07-d821-40f3-917f-2b0f5d300acd container test-container: <nil>
STEP: delete the pod
Jul  5 18:46:31.809: INFO: Waiting for pod pod-ddaaac07-d821-40f3-917f-2b0f5d300acd to disappear
Jul  5 18:46:31.918: INFO: Pod pod-ddaaac07-d821-40f3-917f-2b0f5d300acd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:46:31.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3954" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command","total":-1,"completed":6,"skipped":27,"failed":0}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:45:30.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 42 lines ...
Jul  5 18:45:45.518: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-5628s] to have phase Bound
Jul  5 18:45:45.627: INFO: PersistentVolumeClaim pvc-5628s found and phase=Bound (109.331976ms)
STEP: Deleting the previously created pod
Jul  5 18:45:56.174: INFO: Deleting pod "pvc-volume-tester-9btmm" in namespace "csi-mock-volumes-1984"
Jul  5 18:45:56.284: INFO: Wait up to 5m0s for pod "pvc-volume-tester-9btmm" to be fully deleted
STEP: Checking CSI driver logs
Jul  5 18:46:10.622: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/63b90247-2a21-4f82-b21a-2a6097a764ed/volumes/kubernetes.io~csi/pvc-b021b5f1-1940-4a77-a0c1-ef8b891ef554/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-9btmm
Jul  5 18:46:10.622: INFO: Deleting pod "pvc-volume-tester-9btmm" in namespace "csi-mock-volumes-1984"
STEP: Deleting claim pvc-5628s
Jul  5 18:46:10.950: INFO: Waiting up to 2m0s for PersistentVolume pvc-b021b5f1-1940-4a77-a0c1-ef8b891ef554 to get deleted
Jul  5 18:46:11.059: INFO: PersistentVolume pvc-b021b5f1-1940-4a77-a0c1-ef8b891ef554 was removed
STEP: Deleting storageclass csi-mock-volumes-1984-sc49qqg
... skipping 43 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when CSIDriver does not exist
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist","total":-1,"completed":7,"skipped":27,"failed":0}
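The `{"msg":"PASSED …"}` / `{"msg":"FAILED …"}` lines between the separators are newline-delimited JSON progress records from the test reporter (fields: `msg`, `total`, `completed`, `skipped`, `failed`). A small sketch for tallying them out of a log like this one, with the line format assumed from the samples above:

```python
import json

def tally(log_lines):
    """Count PASSED/FAILED records among NDJSON summary lines.

    Non-JSON lines (timestamps, separators, 'S' skip markers) are ignored.
    """
    passed = failed = 0
    for line in log_lines:
        line = line.strip()
        if not line.startswith('{"msg"'):
            continue
        rec = json.loads(line)
        if rec["msg"].startswith("PASSED"):
            passed += 1
        elif rec["msg"].startswith("FAILED"):
            failed += 1
    return passed, failed

sample = [
    "------------------------------",
    '{"msg":"PASSED [sig-node] Security Context ...","total":-1,"completed":7,"skipped":26,"failed":0}',
    '{"msg":"FAILED [sig-api-machinery] AdmissionWebhook ...","total":-1,"completed":6,"skipped":56,"failed":1}',
]
print(tally(sample))  # (1, 1)
```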

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI Volume expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:562
    should expand volume by restarting pod if attach=off, nodeExpansion=on
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:591
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":3,"skipped":28,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:46:34.527: INFO: Driver hostPath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 18:46:33.850: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f062ea26-4c26-4faf-bb8d-0a7bedfd6cff" in namespace "downward-api-7420" to be "Succeeded or Failed"
Jul  5 18:46:33.962: INFO: Pod "downwardapi-volume-f062ea26-4c26-4faf-bb8d-0a7bedfd6cff": Phase="Pending", Reason="", readiness=false. Elapsed: 111.080632ms
Jul  5 18:46:36.074: INFO: Pod "downwardapi-volume-f062ea26-4c26-4faf-bb8d-0a7bedfd6cff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.223840025s
STEP: Saw pod success
Jul  5 18:46:36.074: INFO: Pod "downwardapi-volume-f062ea26-4c26-4faf-bb8d-0a7bedfd6cff" satisfied condition "Succeeded or Failed"
Jul  5 18:46:36.183: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod downwardapi-volume-f062ea26-4c26-4faf-bb8d-0a7bedfd6cff container client-container: <nil>
STEP: delete the pod
Jul  5 18:46:36.408: INFO: Waiting for pod downwardapi-volume-f062ea26-4c26-4faf-bb8d-0a7bedfd6cff to disappear
Jul  5 18:46:36.517: INFO: Pod downwardapi-volume-f062ea26-4c26-4faf-bb8d-0a7bedfd6cff no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:46:36.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7420" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:46:36.754: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 72 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":68,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:46:32.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
• [SLOW TEST:6.272 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
Jul  5 18:46:38.693: INFO: AfterEach: Cleaning up test resources.
Jul  5 18:46:38.693: INFO: Deleting PersistentVolumeClaim "pvc-2kcfd"
Jul  5 18:46:38.802: INFO: Deleting PersistentVolume "hostpath-drhrc"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately","total":-1,"completed":9,"skipped":40,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:46:38.935: INFO: >>> kubeConfig: /root/.kube/config
... skipping 147 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read/write inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:166
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume","total":-1,"completed":6,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:46:39.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul  5 18:46:40.596: INFO: Waiting up to 5m0s for pod "downward-api-b815188d-6840-4adf-b0ae-a53d974ff895" in namespace "downward-api-6687" to be "Succeeded or Failed"
Jul  5 18:46:40.705: INFO: Pod "downward-api-b815188d-6840-4adf-b0ae-a53d974ff895": Phase="Pending", Reason="", readiness=false. Elapsed: 109.14184ms
Jul  5 18:46:42.816: INFO: Pod "downward-api-b815188d-6840-4adf-b0ae-a53d974ff895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219871095s
STEP: Saw pod success
Jul  5 18:46:42.816: INFO: Pod "downward-api-b815188d-6840-4adf-b0ae-a53d974ff895" satisfied condition "Succeeded or Failed"
Jul  5 18:46:42.937: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod downward-api-b815188d-6840-4adf-b0ae-a53d974ff895 container dapi-container: <nil>
STEP: delete the pod
Jul  5 18:46:43.168: INFO: Waiting for pod downward-api-b815188d-6840-4adf-b0ae-a53d974ff895 to disappear
Jul  5 18:46:43.278: INFO: Pod downward-api-b815188d-6840-4adf-b0ae-a53d974ff895 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:46:43.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6687" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":41,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 18:46:44.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d00d146d-1a13-48cd-9bb3-e15194dcea0a" in namespace "projected-1097" to be "Succeeded or Failed"
Jul  5 18:46:44.294: INFO: Pod "downwardapi-volume-d00d146d-1a13-48cd-9bb3-e15194dcea0a": Phase="Pending", Reason="", readiness=false. Elapsed: 108.978279ms
Jul  5 18:46:46.404: INFO: Pod "downwardapi-volume-d00d146d-1a13-48cd-9bb3-e15194dcea0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219676893s
STEP: Saw pod success
Jul  5 18:46:46.404: INFO: Pod "downwardapi-volume-d00d146d-1a13-48cd-9bb3-e15194dcea0a" satisfied condition "Succeeded or Failed"
Jul  5 18:46:46.514: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod downwardapi-volume-d00d146d-1a13-48cd-9bb3-e15194dcea0a container client-container: <nil>
STEP: delete the pod
Jul  5 18:46:46.739: INFO: Waiting for pod downwardapi-volume-d00d146d-1a13-48cd-9bb3-e15194dcea0a to disappear
Jul  5 18:46:46.848: INFO: Pod downwardapi-volume-d00d146d-1a13-48cd-9bb3-e15194dcea0a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:46:46.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1097" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
STEP: creating replication controller externalname-service in namespace services-8521
I0705 18:44:18.004931   12502 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8521, replica count: 2
I0705 18:44:21.155438   12502 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  5 18:44:21.155: INFO: Creating new exec pod
Jul  5 18:44:26.487: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:44:32.766: INFO: rc: 1
Jul  5 18:44:32.766: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:44:33.767: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:44:40.049: INFO: rc: 1
Jul  5 18:44:40.049: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:44:40.768: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:44:46.979: INFO: rc: 1
Jul  5 18:44:46.979: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:44:47.767: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:44:53.976: INFO: rc: 1
Jul  5 18:44:53.977: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:44:54.767: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:45:01.042: INFO: rc: 1
Jul  5 18:45:01.042: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:45:01.767: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:45:07.974: INFO: rc: 1
Jul  5 18:45:07.975: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
... skipping 182 lines ...
Jul  5 18:46:38.992: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:46:45.148: INFO: rc: 1
Jul  5 18:46:45.148: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-8521 exec execpodnkv7b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:46:45.148: FAIL: Unexpected error:
    <*errors.errorString | 0xc0037d0150>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred
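The failure above comes from a fixed-deadline retry loop: the probe (`kubectl exec ... nc -v -t -w 2 externalname-service 80`) is re-run roughly every seven seconds until a 2m0s timeout expires, at which point the test reports "service is not reachable within 2m0s timeout". A minimal sketch of that pattern, assuming a pluggable probe — this is an illustrative reconstruction, not the e2e framework's actual code, and `wait_for_service` with its parameters is a hypothetical name:

```python
import time

def wait_for_service(probe, timeout=120.0, interval=7.0,
                     clock=time.monotonic, sleep=time.sleep):
    """Retry `probe` until it succeeds or `timeout` seconds elapse.

    Illustrative sketch of the loop visible in the log above: each
    failed attempt logs "Retrying..." and runs again, and once the
    2m0s deadline passes the caller raises the "service is not
    reachable within 2m0s timeout" failure.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        # `probe` stands in for the `kubectl exec ... nc` invocation.
        if probe():
            return True
        sleep(interval)
    return False
```

With a probe that never succeeds, as in this run where `nc` could not resolve `externalname-service` (`getaddrinfo: Try again`), the function exhausts the deadline and returns False.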

... skipping 216 lines ...
• Failure [152.946 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 18:46:45.148: Unexpected error:
      <*errors.errorString | 0xc0037d0150>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1330
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":2,"skipped":24,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] PV Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Jul  5 18:46:51.429: INFO: AfterEach: Cleaning up test resources.
Jul  5 18:46:51.429: INFO: pvc is nil
Jul  5 18:46:51.429: INFO: Deleting PersistentVolume "hostpath-jc9w4"

•
------------------------------
{"msg":"PASSED [sig-storage] PV Protection Verify \"immediate\" deletion of a PV that is not bound to a PVC","total":-1,"completed":3,"skipped":28,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:46:53.374: INFO: Only supported for providers [openstack] (not aws)
... skipping 176 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI FSGroupPolicy [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1559
    should modify fsGroup if fsGroupPolicy=default
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:1583
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default","total":-1,"completed":6,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:37.206 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should be able to schedule after more than 100 missed schedule
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
------------------------------
{"msg":"PASSED [sig-apps] CronJob should be able to schedule after more than 100 missed schedule","total":-1,"completed":7,"skipped":45,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:02.344: INFO: Only supported for providers [gce gke] (not aws)
... skipping 220 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":8,"skipped":28,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:05.699: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 59 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":4,"skipped":46,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly directory specified in the volumeMount
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
Jul  5 18:47:02.965: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  5 18:47:02.965: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-hkxr
STEP: Creating a pod to test subpath
Jul  5 18:47:03.077: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-hkxr" in namespace "provisioning-3336" to be "Succeeded or Failed"
Jul  5 18:47:03.186: INFO: Pod "pod-subpath-test-inlinevolume-hkxr": Phase="Pending", Reason="", readiness=false. Elapsed: 108.702667ms
Jul  5 18:47:05.295: INFO: Pod "pod-subpath-test-inlinevolume-hkxr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217867468s
Jul  5 18:47:07.404: INFO: Pod "pod-subpath-test-inlinevolume-hkxr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.327489133s
STEP: Saw pod success
Jul  5 18:47:07.405: INFO: Pod "pod-subpath-test-inlinevolume-hkxr" satisfied condition "Succeeded or Failed"
Jul  5 18:47:07.513: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-hkxr container test-container-subpath-inlinevolume-hkxr: <nil>
STEP: delete the pod
Jul  5 18:47:07.738: INFO: Waiting for pod pod-subpath-test-inlinevolume-hkxr to disappear
Jul  5 18:47:07.846: INFO: Pod pod-subpath-test-inlinevolume-hkxr no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-hkxr
Jul  5 18:47:07.846: INFO: Deleting pod "pod-subpath-test-inlinevolume-hkxr" in namespace "provisioning-3336"
... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":8,"skipped":63,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 19 lines ...
Jul  5 18:46:59.585: INFO: PersistentVolumeClaim pvc-bp9q5 found but phase is Pending instead of Bound.
Jul  5 18:47:01.694: INFO: PersistentVolumeClaim pvc-bp9q5 found and phase=Bound (10.658354865s)
Jul  5 18:47:01.694: INFO: Waiting up to 3m0s for PersistentVolume local-xqm2h to have phase Bound
Jul  5 18:47:01.803: INFO: PersistentVolume local-xqm2h found and phase=Bound (108.868879ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-99vg
STEP: Creating a pod to test subpath
Jul  5 18:47:02.132: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-99vg" in namespace "provisioning-2817" to be "Succeeded or Failed"
Jul  5 18:47:02.251: INFO: Pod "pod-subpath-test-preprovisionedpv-99vg": Phase="Pending", Reason="", readiness=false. Elapsed: 118.894303ms
Jul  5 18:47:04.360: INFO: Pod "pod-subpath-test-preprovisionedpv-99vg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228424979s
Jul  5 18:47:06.471: INFO: Pod "pod-subpath-test-preprovisionedpv-99vg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.338622418s
STEP: Saw pod success
Jul  5 18:47:06.471: INFO: Pod "pod-subpath-test-preprovisionedpv-99vg" satisfied condition "Succeeded or Failed"
Jul  5 18:47:06.580: INFO: Trying to get logs from node ip-172-20-47-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-99vg container test-container-volume-preprovisionedpv-99vg: <nil>
STEP: delete the pod
Jul  5 18:47:06.806: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-99vg to disappear
Jul  5 18:47:06.915: INFO: Pod pod-subpath-test-preprovisionedpv-99vg no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-99vg
Jul  5 18:47:06.915: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-99vg" in namespace "provisioning-2817"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":12,"skipped":44,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:08.478: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 78 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should list, patch and delete a collection of StatefulSets
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:895
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets","total":-1,"completed":9,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
Jul  5 18:47:09.556: INFO: Creating a PV followed by a PVC
Jul  5 18:47:09.778: INFO: Waiting for PV local-pvxr526 to bind to PVC pvc-h9dth
Jul  5 18:47:09.778: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-h9dth] to have phase Bound
Jul  5 18:47:09.888: INFO: PersistentVolumeClaim pvc-h9dth found and phase=Bound (110.288453ms)
Jul  5 18:47:09.888: INFO: Waiting up to 3m0s for PersistentVolume local-pvxr526 to have phase Bound
Jul  5 18:47:09.998: INFO: PersistentVolume local-pvxr526 found and phase=Bound (109.855829ms)
[It] should fail scheduling due to different NodeSelector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
STEP: local-volume-type: dir
Jul  5 18:47:10.330: INFO: Waiting up to 5m0s for pod "pod-e6374b4a-43d9-44c6-b88c-ca46da228dc1" in namespace "persistent-local-volumes-test-3676" to be "Unschedulable"
Jul  5 18:47:10.440: INFO: Pod "pod-e6374b4a-43d9-44c6-b88c-ca46da228dc1": Phase="Pending", Reason="", readiness=false. Elapsed: 110.475058ms
Jul  5 18:47:10.440: INFO: Pod "pod-e6374b4a-43d9-44c6-b88c-ca46da228dc1" satisfied condition "Unschedulable"
[AfterEach] Pod with node different from PV's NodeAffinity
... skipping 12 lines ...

• [SLOW TEST:6.066 seconds]
[sig-storage] PersistentVolumes-local 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:347
    should fail scheduling due to different NodeSelector
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:379
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":5,"skipped":49,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:47:11.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-837" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":13,"skipped":59,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:11.885: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
Jul  5 18:47:12.049: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.781 seconds]
[sig-storage] PersistentVolumes GCEPD
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:127

  Only supported for providers [gce gke] (not aws)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-gce.go:85
------------------------------
... skipping 10 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: block]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 149 lines ...
• [SLOW TEST:7.806 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 90 lines ...
• [SLOW TEST:6.546 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Deployment Status endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:488
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints","total":-1,"completed":10,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:18.685: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 198 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":7,"skipped":57,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 35 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":34,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:46:20.329: INFO: >>> kubeConfig: /root/.kube/config
... skipping 57 lines ...
Jul  5 18:46:28.579: INFO: PersistentVolumeClaim csi-hostpathhwclp found but phase is Pending instead of Bound.
Jul  5 18:46:30.689: INFO: PersistentVolumeClaim csi-hostpathhwclp found but phase is Pending instead of Bound.
Jul  5 18:46:32.799: INFO: PersistentVolumeClaim csi-hostpathhwclp found but phase is Pending instead of Bound.
Jul  5 18:46:34.909: INFO: PersistentVolumeClaim csi-hostpathhwclp found and phase=Bound (8.548500939s)
STEP: Expanding non-expandable pvc
Jul  5 18:46:35.130: INFO: currentPvcSize {{1073741824 0} {<nil>} 1Gi BinarySI}, newSize {{2147483648 0} {<nil>}  BinarySI}
Jul  5 18:46:35.348: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:37.567: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:39.567: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:41.567: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:43.568: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:45.569: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:47.567: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:49.575: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:51.567: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:53.567: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:55.567: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:57.570: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:46:59.570: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:47:01.568: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:47:03.568: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:47:05.573: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  5 18:47:05.791: INFO: Error updating pvc csi-hostpathhwclp: persistentvolumeclaims "csi-hostpathhwclp" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul  5 18:47:05.791: INFO: Deleting PersistentVolumeClaim "csi-hostpathhwclp"
Jul  5 18:47:05.902: INFO: Waiting up to 5m0s for PersistentVolume pvc-b818d3eb-d213-43f3-8314-81953e4b2e0f to get deleted
Jul  5 18:47:06.010: INFO: PersistentVolume pvc-b818d3eb-d213-43f3-8314-81953e4b2e0f was removed
STEP: Deleting sc
STEP: deleting the test namespace: volume-expand-9568
... skipping 52 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should not allow expansion of pvcs without AllowVolumeExpansion property
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:157
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":6,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:23.120: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 48 lines ...
Jul  5 18:47:21.413: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jul  5 18:47:21.413: INFO: stdout: "scheduler etcd-0 controller-manager etcd-1"
STEP: getting details of componentstatuses
STEP: getting status of scheduler
Jul  5 18:47:21.413: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6392 get componentstatuses scheduler'
Jul  5 18:47:21.855: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jul  5 18:47:21.855: INFO: stdout: "NAME        STATUS    MESSAGE   ERROR\nscheduler   Healthy   ok        \n"
STEP: getting status of etcd-0
Jul  5 18:47:21.856: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6392 get componentstatuses etcd-0'
Jul  5 18:47:22.297: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jul  5 18:47:22.297: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
STEP: getting status of controller-manager
Jul  5 18:47:22.297: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6392 get componentstatuses controller-manager'
Jul  5 18:47:22.707: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jul  5 18:47:22.707: INFO: stdout: "NAME                 STATUS    MESSAGE   ERROR\ncontroller-manager   Healthy   ok        \n"
STEP: getting status of etcd-1
Jul  5 18:47:22.707: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-6392 get componentstatuses etcd-1'
Jul  5 18:47:23.111: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Jul  5 18:47:23.112: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-1   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:47:23.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6392" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":8,"skipped":58,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:23.351: INFO: Driver "local" does not provide raw block - skipping
... skipping 65 lines ...
Jul  5 18:47:15.811: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-gnrfb] to have phase Bound
Jul  5 18:47:15.921: INFO: PersistentVolumeClaim pvc-gnrfb found and phase=Bound (109.736774ms)
Jul  5 18:47:15.921: INFO: Waiting up to 3m0s for PersistentVolume local-l4v5w to have phase Bound
Jul  5 18:47:16.031: INFO: PersistentVolume local-l4v5w found and phase=Bound (110.422516ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-nmbv
STEP: Creating a pod to test subpath
Jul  5 18:47:16.367: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-nmbv" in namespace "provisioning-5632" to be "Succeeded or Failed"
Jul  5 18:47:16.478: INFO: Pod "pod-subpath-test-preprovisionedpv-nmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 110.255066ms
Jul  5 18:47:18.592: INFO: Pod "pod-subpath-test-preprovisionedpv-nmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22411769s
Jul  5 18:47:20.703: INFO: Pod "pod-subpath-test-preprovisionedpv-nmbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335252065s
Jul  5 18:47:22.813: INFO: Pod "pod-subpath-test-preprovisionedpv-nmbv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.445853043s
STEP: Saw pod success
Jul  5 18:47:22.813: INFO: Pod "pod-subpath-test-preprovisionedpv-nmbv" satisfied condition "Succeeded or Failed"
Jul  5 18:47:22.923: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-nmbv container test-container-volume-preprovisionedpv-nmbv: <nil>
STEP: delete the pod
Jul  5 18:47:23.151: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-nmbv to disappear
Jul  5 18:47:23.264: INFO: Pod pod-subpath-test-preprovisionedpv-nmbv no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-nmbv
Jul  5 18:47:23.264: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-nmbv" in namespace "provisioning-5632"
... skipping 22 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":6,"skipped":51,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:25.576: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 83 lines ...
      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
S
------------------------------
{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":10,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:47:17.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing directories when readOnly specified in the volumeSource
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:394
Jul  5 18:47:18.142: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  5 18:47:18.363: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7936" in namespace "provisioning-7936" to be "Succeeded or Failed"
Jul  5 18:47:18.471: INFO: Pod "hostpath-symlink-prep-provisioning-7936": Phase="Pending", Reason="", readiness=false. Elapsed: 108.362944ms
Jul  5 18:47:20.581: INFO: Pod "hostpath-symlink-prep-provisioning-7936": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218371011s
Jul  5 18:47:22.692: INFO: Pod "hostpath-symlink-prep-provisioning-7936": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329396227s
STEP: Saw pod success
Jul  5 18:47:22.692: INFO: Pod "hostpath-symlink-prep-provisioning-7936" satisfied condition "Succeeded or Failed"
Jul  5 18:47:22.692: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7936" in namespace "provisioning-7936"
Jul  5 18:47:22.807: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7936" to be fully deleted
Jul  5 18:47:22.915: INFO: Creating resource for inline volume
Jul  5 18:47:22.915: INFO: Driver hostPathSymlink on volume type InlineVolume doesn't support readOnly source
STEP: Deleting pod
Jul  5 18:47:22.916: INFO: Deleting pod "pod-subpath-test-inlinevolume-znvn" in namespace "provisioning-7936"
Jul  5 18:47:23.134: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-7936" in namespace "provisioning-7936" to be "Succeeded or Failed"
Jul  5 18:47:23.243: INFO: Pod "hostpath-symlink-prep-provisioning-7936": Phase="Pending", Reason="", readiness=false. Elapsed: 108.300549ms
Jul  5 18:47:25.357: INFO: Pod "hostpath-symlink-prep-provisioning-7936": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.222107395s
STEP: Saw pod success
Jul  5 18:47:25.357: INFO: Pod "hostpath-symlink-prep-provisioning-7936" satisfied condition "Succeeded or Failed"
Jul  5 18:47:25.357: INFO: Deleting pod "hostpath-symlink-prep-provisioning-7936" in namespace "provisioning-7936"
Jul  5 18:47:25.490: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-7936" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:47:25.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-7936" for this suite.
... skipping 99 lines ...
• [SLOW TEST:5.674 seconds]
[sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:265
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":7,"skipped":60,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:31.325: INFO: Driver local doesn't support ext3 -- skipping
... skipping 138 lines ...
• [SLOW TEST:6.272 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] API priority and fairness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 100 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:992
    should create/apply a CR with unknown fields for CRD with no validation schema
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:993
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":9,"skipped":75,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:38.795: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 249 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393
    should be able to retrieve and filter logs  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":11,"skipped":119,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:47:37.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  5 18:47:37.923: INFO: Waiting up to 5m0s for pod "pod-6510d837-fb6e-4519-bb16-de35222bdf19" in namespace "emptydir-1649" to be "Succeeded or Failed"
Jul  5 18:47:38.032: INFO: Pod "pod-6510d837-fb6e-4519-bb16-de35222bdf19": Phase="Pending", Reason="", readiness=false. Elapsed: 109.375608ms
Jul  5 18:47:40.142: INFO: Pod "pod-6510d837-fb6e-4519-bb16-de35222bdf19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219372212s
STEP: Saw pod success
Jul  5 18:47:40.142: INFO: Pod "pod-6510d837-fb6e-4519-bb16-de35222bdf19" satisfied condition "Succeeded or Failed"
Jul  5 18:47:40.251: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-6510d837-fb6e-4519-bb16-de35222bdf19 container test-container: <nil>
STEP: delete the pod
Jul  5 18:47:40.475: INFO: Waiting for pod pod-6510d837-fb6e-4519-bb16-de35222bdf19 to disappear
Jul  5 18:47:40.583: INFO: Pod pod-6510d837-fb6e-4519-bb16-de35222bdf19 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:47:40.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1649" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":48,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:40.854: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 69 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 18:47:40.658: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6dd3be8-c167-4904-a550-573693df15a0" in namespace "downward-api-7651" to be "Succeeded or Failed"
Jul  5 18:47:40.767: INFO: Pod "downwardapi-volume-a6dd3be8-c167-4904-a550-573693df15a0": Phase="Pending", Reason="", readiness=false. Elapsed: 109.513957ms
Jul  5 18:47:42.878: INFO: Pod "downwardapi-volume-a6dd3be8-c167-4904-a550-573693df15a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220480497s
STEP: Saw pod success
Jul  5 18:47:42.879: INFO: Pod "downwardapi-volume-a6dd3be8-c167-4904-a550-573693df15a0" satisfied condition "Succeeded or Failed"
Jul  5 18:47:42.988: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod downwardapi-volume-a6dd3be8-c167-4904-a550-573693df15a0 container client-container: <nil>
STEP: delete the pod
Jul  5 18:47:43.215: INFO: Waiting for pod downwardapi-volume-a6dd3be8-c167-4904-a550-573693df15a0 to disappear
Jul  5 18:47:43.327: INFO: Pod downwardapi-volume-a6dd3be8-c167-4904-a550-573693df15a0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:47:43.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7651" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":121,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:43.567: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 75 lines ...
Jul  5 18:47:40.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  5 18:47:41.554: INFO: Waiting up to 5m0s for pod "pod-ff62063e-010d-4a5c-80d5-c8c814758603" in namespace "emptydir-9239" to be "Succeeded or Failed"
Jul  5 18:47:41.662: INFO: Pod "pod-ff62063e-010d-4a5c-80d5-c8c814758603": Phase="Pending", Reason="", readiness=false. Elapsed: 108.749806ms
Jul  5 18:47:43.772: INFO: Pod "pod-ff62063e-010d-4a5c-80d5-c8c814758603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218153298s
STEP: Saw pod success
Jul  5 18:47:43.772: INFO: Pod "pod-ff62063e-010d-4a5c-80d5-c8c814758603" satisfied condition "Succeeded or Failed"
Jul  5 18:47:43.881: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-ff62063e-010d-4a5c-80d5-c8c814758603 container test-container: <nil>
STEP: delete the pod
Jul  5 18:47:44.107: INFO: Waiting for pod pod-ff62063e-010d-4a5c-80d5-c8c814758603 to disappear
Jul  5 18:47:44.216: INFO: Pod pod-ff62063e-010d-4a5c-80d5-c8c814758603 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:47:44.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9239" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":61,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:47:44.448: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 20 lines ...
Jul  5 18:47:23.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
[It] should store data
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
Jul  5 18:47:23.682: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  5 18:47:23.908: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-9854" in namespace "volume-9854" to be "Succeeded or Failed"
Jul  5 18:47:24.016: INFO: Pod "hostpath-symlink-prep-volume-9854": Phase="Pending", Reason="", readiness=false. Elapsed: 108.83457ms
Jul  5 18:47:26.126: INFO: Pod "hostpath-symlink-prep-volume-9854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218704039s
STEP: Saw pod success
Jul  5 18:47:26.126: INFO: Pod "hostpath-symlink-prep-volume-9854" satisfied condition "Succeeded or Failed"
Jul  5 18:47:26.126: INFO: Deleting pod "hostpath-symlink-prep-volume-9854" in namespace "volume-9854"
Jul  5 18:47:26.240: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-9854" to be fully deleted
Jul  5 18:47:26.352: INFO: Creating resource for inline volume
STEP: starting hostpathsymlink-injector
STEP: Writing text file contents in the container.
Jul  5 18:47:30.684: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=volume-9854 exec hostpathsymlink-injector --namespace=volume-9854 -- /bin/sh -c echo 'Hello from hostPathSymlink from namespace volume-9854' > /opt/0/index.html'
... skipping 52 lines ...
Jul  5 18:48:08.223: INFO: Pod hostpathsymlink-client still exists
Jul  5 18:48:10.116: INFO: Waiting for pod hostpathsymlink-client to disappear
Jul  5 18:48:10.224: INFO: Pod hostpathsymlink-client still exists
Jul  5 18:48:12.116: INFO: Waiting for pod hostpathsymlink-client to disappear
Jul  5 18:48:12.224: INFO: Pod hostpathsymlink-client no longer exists
STEP: cleaning the environment after hostpathsymlink
Jul  5 18:48:12.340: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-volume-9854" in namespace "volume-9854" to be "Succeeded or Failed"
Jul  5 18:48:12.449: INFO: Pod "hostpath-symlink-prep-volume-9854": Phase="Pending", Reason="", readiness=false. Elapsed: 108.552401ms
Jul  5 18:48:14.558: INFO: Pod "hostpath-symlink-prep-volume-9854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218079159s
STEP: Saw pod success
Jul  5 18:48:14.558: INFO: Pod "hostpath-symlink-prep-volume-9854" satisfied condition "Succeeded or Failed"
Jul  5 18:48:14.558: INFO: Deleting pod "hostpath-symlink-prep-volume-9854" in namespace "volume-9854"
Jul  5 18:48:14.673: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-volume-9854" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:48:14.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "volume-9854" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":7,"skipped":38,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:48:15.049: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 157 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Simple pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:379
    should support inline execution and attach
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:555
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":8,"skipped":74,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 422 lines ...
• [SLOW TEST:11.012 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":9,"skipped":80,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:48:35.901: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
I0705 18:45:52.556289   12621 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 18:45:55.556649   12621 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jul  5 18:45:55.893: INFO: Creating new exec pod
Jul  5 18:46:00.220: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:46:16.463: INFO: rc: 1
Jul  5 18:46:16.463: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:46:18.464: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:46:34.647: INFO: rc: 1
Jul  5 18:46:34.647: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:46:36.465: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:46:52.672: INFO: rc: 1
Jul  5 18:46:52.782: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:46:54.465: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:47:10.624: INFO: rc: 1
Jul  5 18:47:10.624: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:47:12.464: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:47:28.650: INFO: rc: 1
Jul  5 18:47:28.650: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:47:30.464: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:47:46.658: INFO: rc: 1
Jul  5 18:47:46.658: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:47:48.464: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:48:04.627: INFO: rc: 1
Jul  5 18:48:04.627: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:48:06.463: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:48:22.715: INFO: rc: 1
Jul  5 18:48:22.717: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:48:22.717: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-4830 exec execpodhltx5 -- /bin/sh -x -c nslookup nodeport-service.services-4830.svc.cluster.local'
Jul  5 18:48:38.870: INFO: rc: 1
Jul  5 18:48:38.870: INFO: ExternalName service "services-4830/execpodhltx5" failed to resolve to IP
Jul  5 18:48:38.871: FAIL: Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 227 lines ...
• Failure [186.686 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 18:48:38.871: Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1455
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":2,"skipped":32,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:48:55.228: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 253 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:48:58.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-5269" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":3,"skipped":46,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:48:58.299: INFO: Only supported for providers [azure] (not aws)
... skipping 99 lines ...
Jul  5 18:43:56.526: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass volume-8187m8lwz
STEP: creating a claim
Jul  5 18:43:56.645: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-cn2g
STEP: Creating a pod to test exec-volume-test
Jul  5 18:43:57.010: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-cn2g" in namespace "volume-8187" to be "Succeeded or Failed"
Jul  5 18:43:57.119: INFO: Pod "exec-volume-test-dynamicpv-cn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 108.456892ms
Jul  5 18:43:59.230: INFO: Pod "exec-volume-test-dynamicpv-cn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219560585s
Jul  5 18:44:01.339: INFO: Pod "exec-volume-test-dynamicpv-cn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328486099s
Jul  5 18:44:03.450: INFO: Pod "exec-volume-test-dynamicpv-cn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439276242s
Jul  5 18:44:05.565: INFO: Pod "exec-volume-test-dynamicpv-cn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555038598s
Jul  5 18:44:07.674: INFO: Pod "exec-volume-test-dynamicpv-cn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663703915s
... skipping 135 lines ...
Jul  5 18:48:54.730: INFO: Pod "exec-volume-test-dynamicpv-cn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.719872734s
Jul  5 18:48:56.840: INFO: Pod "exec-volume-test-dynamicpv-cn2g": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.829070162s
Jul  5 18:48:59.057: INFO: Output of node "" pod "exec-volume-test-dynamicpv-cn2g" container "exec-container-dynamicpv-cn2g": 
STEP: delete the pod
Jul  5 18:48:59.170: INFO: Waiting for pod exec-volume-test-dynamicpv-cn2g to disappear
Jul  5 18:48:59.278: INFO: Pod exec-volume-test-dynamicpv-cn2g no longer exists
Jul  5 18:48:59.279: FAIL: Unexpected error:
    <*errors.errorString | 0xc0032413b0>: {
        s: "expected pod \"exec-volume-test-dynamicpv-cn2g\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-dynamicpv-cn2g\" to be \"Succeeded or Failed\"",
    }
    expected pod "exec-volume-test-dynamicpv-cn2g" success: Gave up after waiting 5m0s for pod "exec-volume-test-dynamicpv-cn2g" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00220f600, 0x6fd77e0, 0x10, 0xc0027e6000, 0x0, 0xc000cb90d8, 0x1, 0x1, 0x71d34c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-8187".
STEP: Found 5 events.
Jul  5 18:48:59.728: INFO: At 2021-07-05 18:43:56 +0000 UTC - event for aws2lrzj: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 18:48:59.728: INFO: At 2021-07-05 18:43:56 +0000 UTC - event for aws2lrzj: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "volume-8187/aws2lrzj"
Jul  5 18:48:59.728: INFO: At 2021-07-05 18:43:56 +0000 UTC - event for aws2lrzj: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 18:48:59.728: INFO: At 2021-07-05 18:44:06 +0000 UTC - event for aws2lrzj: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "volume-8187m8lwz": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 18:48:59.728: INFO: At 2021-07-05 18:44:29 +0000 UTC - event for aws2lrzj: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "volume-8187m8lwz": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 18:48:59.837: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 18:48:59.837: INFO: 
Jul  5 18:49:00.054: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 18:49:00.163: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 6905 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2021-07-05 18:45:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 
k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 
k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-04eb7308128e98652],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 195 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Jul  5 18:48:59.279: Unexpected error:
          <*errors.errorString | 0xc0032413b0>: {
              s: "expected pod \"exec-volume-test-dynamicpv-cn2g\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-dynamicpv-cn2g\" to be \"Succeeded or Failed\"",
          }
          expected pod "exec-volume-test-dynamicpv-cn2g" success: Gave up after waiting 5m0s for pod "exec-volume-test-dynamicpv-cn2g" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
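Each spec in this log ends with a one-line JSON summary like the `{"msg":"FAILED ...","total":-1,"completed":0,"skipped":0,"failed":1,"failures":[...]}` line below, which downstream tooling can tally. A small sketch of parsing those lines (the struct and function names are illustrative; field names follow the log):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// testSummary models the per-spec JSON lines in this log. "failed" counts
// failures accumulated so far on this worker; "failures" lists their names.
type testSummary struct {
	Msg       string   `json:"msg"`
	Total     int      `json:"total"`
	Completed int      `json:"completed"`
	Skipped   int      `json:"skipped"`
	Failed    int      `json:"failed"`
	Failures  []string `json:"failures"`
}

// parseSummary decodes one summary line; non-JSON log lines would fail here
// and should simply be skipped by a caller scanning the whole log.
func parseSummary(line string) (testSummary, error) {
	var s testSummary
	err := json.Unmarshal([]byte(line), &s)
	return s, err
}

func main() {
	line := `{"msg":"FAILED [sig-storage] In-tree Volumes","total":-1,"completed":0,"skipped":0,"failed":1,"failures":["[sig-storage] In-tree Volumes"]}`
	s, err := parseSummary(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Failed, len(s.Failures))
}
```

Note that `"failed"` is cumulative per worker, which is why passing specs later in the log can still report `"failed":1` with an earlier failure listed in `"failures"`.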
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","total":-1,"completed":0,"skipped":0,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:04.440: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
Jul  5 18:43:56.504: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9034gldx9
STEP: creating a claim
Jul  5 18:43:56.617: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-m4l5
STEP: Creating a pod to test subpath
Jul  5 18:43:57.002: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-m4l5" in namespace "provisioning-9034" to be "Succeeded or Failed"
Jul  5 18:43:57.112: INFO: Pod "pod-subpath-test-dynamicpv-m4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 110.103751ms
Jul  5 18:43:59.222: INFO: Pod "pod-subpath-test-dynamicpv-m4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220583927s
Jul  5 18:44:01.333: INFO: Pod "pod-subpath-test-dynamicpv-m4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331287413s
Jul  5 18:44:03.444: INFO: Pod "pod-subpath-test-dynamicpv-m4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442679297s
Jul  5 18:44:05.560: INFO: Pod "pod-subpath-test-dynamicpv-m4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.558297318s
Jul  5 18:44:07.672: INFO: Pod "pod-subpath-test-dynamicpv-m4l5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.670072604s
... skipping 137 lines ...
Jul  5 18:48:59.208: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-m4l5" container "init-volume-dynamicpv-m4l5": 
Jul  5 18:48:59.318: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-m4l5" container "test-init-volume-dynamicpv-m4l5": 
Jul  5 18:48:59.429: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-m4l5" container "test-container-subpath-dynamicpv-m4l5": 
STEP: delete the pod
Jul  5 18:48:59.545: INFO: Waiting for pod pod-subpath-test-dynamicpv-m4l5 to disappear
Jul  5 18:48:59.656: INFO: Pod pod-subpath-test-dynamicpv-m4l5 no longer exists
Jul  5 18:48:59.656: FAIL: Unexpected error:
    <*errors.errorString | 0xc002da8e90>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-m4l5\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-m4l5\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-dynamicpv-m4l5" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-m4l5" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0027066e0, 0x6fb6221, 0x7, 0xc004161c00, 0x0, 0xc0008f90e0, 0x1, 0x1, 0x71d34c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-9034".
STEP: Found 6 events.
Jul  5 18:49:00.220: INFO: At 2021-07-05 18:43:56 +0000 UTC - event for awsczhqf: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 18:49:00.220: INFO: At 2021-07-05 18:43:56 +0000 UTC - event for awsczhqf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "provisioning-9034/awsczhqf"
Jul  5 18:49:00.220: INFO: At 2021-07-05 18:43:56 +0000 UTC - event for awsczhqf: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 18:49:00.220: INFO: At 2021-07-05 18:44:06 +0000 UTC - event for awsczhqf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-9034gldx9": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 18:49:00.220: INFO: At 2021-07-05 18:44:17 +0000 UTC - event for awsczhqf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-9034gldx9": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 18:49:00.220: INFO: At 2021-07-05 18:48:59 +0000 UTC - event for pod-subpath-test-dynamicpv-m4l5: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: pod does not exist any more: pod "pod-subpath-test-dynamicpv-m4l5" not found
Jul  5 18:49:00.331: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 18:49:00.331: INFO: 
Jul  5 18:49:00.551: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
... skipping 195 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364

      Jul  5 18:48:59.656: Unexpected error:
          <*errors.errorString | 0xc002da8e90>: {
              s: "expected pod \"pod-subpath-test-dynamicpv-m4l5\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-m4l5\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-dynamicpv-m4l5" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-m4l5" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":0,"skipped":1,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
... skipping 8 lines ...
Jul  5 18:44:01.344: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  5 18:44:01.344: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  5 18:44:01.344: INFO: In creating storage class object and pvc objects for driver - sc: &StorageClass{ObjectMeta:{provisioning-15945n2cn      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:nil,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},}, pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1594    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-15945n2cn,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}, src-pvc: &PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1594    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-15945n2cn,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: Creating a StorageClass
STEP: creating claim=&PersistentVolumeClaim{ObjectMeta:{ pvc- provisioning-1594    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] []  []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-15945n2cn,VolumeMode:nil,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
STEP: creating a pod referring to the class=&StorageClass{ObjectMeta:{provisioning-15945n2cn    14e12078-32c3-4ba3-ae63-f8807a0837fd 2151 0 2021-07-05 18:44:01 +0000 UTC <nil> <nil> map[] map[] [] []  [{e2e.test Update storage.k8s.io/v1 2021-07-05 18:44:01 +0000 UTC FieldsV1 {"f:mountOptions":{},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}} }]},Provisioner:kubernetes.io/aws-ebs,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[debug nouid32],AllowVolumeExpansion:nil,VolumeBindingMode:*WaitForFirstConsumer,AllowedTopologies:[]TopologySelectorTerm{},} claim=&PersistentVolumeClaim{ObjectMeta:{pvc-cmnzr pvc- provisioning-1594  5339d7e2-4c9f-4cfd-8bc9-a20754685014 2171 0 2021-07-05 18:44:01 +0000 UTC <nil> <nil> map[] map[] [] [kubernetes.io/pvc-protection]  [{e2e.test Update v1 2021-07-05 18:44:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:storageClassName":{},"f:volumeMode":{}}} }]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{1073741824 0} {<nil>} 1Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*provisioning-15945n2cn,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
Jul  5 18:49:02.330: FAIL: Unexpected error:
    <*errors.errorString | 0xc003674830>: {
        s: "pod \"pod-0f5ba2e2-8b8d-4503-9ba6-80254bfd76f9\" is not Running: timed out waiting for the condition",
    }
    pod "pod-0f5ba2e2-8b8d-4503-9ba6-80254bfd76f9" is not Running: timed out waiting for the condition
occurred

... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-1594".
STEP: Found 5 events.
Jul  5 18:49:02.662: INFO: At 2021-07-05 18:44:01 +0000 UTC - event for pvc-cmnzr: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 18:49:02.662: INFO: At 2021-07-05 18:44:01 +0000 UTC - event for pvc-cmnzr: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "provisioning-1594/pvc-cmnzr"
Jul  5 18:49:02.662: INFO: At 2021-07-05 18:44:01 +0000 UTC - event for pvc-cmnzr: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 18:49:02.662: INFO: At 2021-07-05 18:44:11 +0000 UTC - event for pvc-cmnzr: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-15945n2cn": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 18:49:02.662: INFO: At 2021-07-05 18:44:34 +0000 UTC - event for pvc-cmnzr: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-15945n2cn": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 18:49:02.771: INFO: POD                                       NODE  PHASE    GRACE  CONDITIONS
Jul  5 18:49:02.771: INFO: pod-0f5ba2e2-8b8d-4503-9ba6-80254bfd76f9        Pending         []
Jul  5 18:49:02.771: INFO: 
Jul  5 18:49:02.881: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 18:49:02.990: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 6905 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2021-07-05 18:45:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 
k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 
k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-04eb7308128e98652],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 202 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] provisioning
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should provision storage with mount options [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:180

      Jul  5 18:49:02.332: Unexpected error:
          <*errors.errorString | 0xc003674830>: {
              s: "pod \"pod-0f5ba2e2-8b8d-4503-9ba6-80254bfd76f9\" is not Running: timed out waiting for the condition",
          }
          pod "pod-0f5ba2e2-8b8d-4503-9ba6-80254bfd76f9" is not Running: timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/provisioning.go:418
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","total":-1,"completed":1,"skipped":48,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:07.615: INFO: Only supported for providers [openstack] (not aws)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 112 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume at the same time
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:248
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:249
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":4,"skipped":56,"failed":1,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:14.804: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 118 lines ...
Jul  5 18:49:07.063: INFO: Waiting for pod aws-injector to disappear
Jul  5 18:49:07.173: INFO: Pod aws-injector still exists
Jul  5 18:49:09.063: INFO: Waiting for pod aws-injector to disappear
Jul  5 18:49:09.173: INFO: Pod aws-injector still exists
Jul  5 18:49:11.064: INFO: Waiting for pod aws-injector to disappear
Jul  5 18:49:11.174: INFO: Pod aws-injector no longer exists
Jul  5 18:49:11.175: FAIL: Failed to create injector pod: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186 +0x3ff
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000471e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 11 lines ...
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-6909".
STEP: Found 5 events.
Jul  5 18:49:12.071: INFO: At 2021-07-05 18:43:57 +0000 UTC - event for pvc-sch9t: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volume-6909" not found
Jul  5 18:49:12.071: INFO: At 2021-07-05 18:44:02 +0000 UTC - event for aws-injector: {default-scheduler } Scheduled: Successfully assigned volume-6909/aws-injector to ip-172-20-47-191.eu-central-1.compute.internal
Jul  5 18:49:12.071: INFO: At 2021-07-05 18:44:18 +0000 UTC - event for aws-injector: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "aws-xd652" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 18:49:12.071: INFO: At 2021-07-05 18:44:53 +0000 UTC - event for aws-injector: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "aws-xd652" : rpc error: code = NotFound desc = Instance "i-058f6020d482b4947" not found
Jul  5 18:49:12.071: INFO: At 2021-07-05 18:46:05 +0000 UTC - event for aws-injector: {kubelet ip-172-20-47-191.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[aws-volume-0], unattached volumes=[aws-volume-0 kube-api-access-x4xc5]: timed out waiting for the condition
Jul  5 18:49:12.180: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 18:49:12.181: INFO: 
Jul  5 18:49:12.413: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 18:49:12.523: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 6905 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2021-07-05 18:45:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 
k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 
k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-04eb7308128e98652],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 206 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Jul  5 18:49:11.175: Failed to create injector pod: timed out waiting for the condition

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:16.959: INFO: Only supported for providers [gce gke] (not aws)
... skipping 12 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":0,"skipped":1,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:17.638: INFO: Only supported for providers [vsphere] (not aws)
... skipping 58 lines ...
      Only supported for node OS distro [gci ubuntu custom] (not debian)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:263
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":11,"skipped":70,"failed":0}
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 18:48:55.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 66 lines ...
Jul  5 18:49:15.262: INFO: PersistentVolumeClaim pvc-z49jr found but phase is Pending instead of Bound.
Jul  5 18:49:17.373: INFO: PersistentVolumeClaim pvc-z49jr found and phase=Bound (8.556846575s)
Jul  5 18:49:17.402: INFO: Waiting up to 3m0s for PersistentVolume local-tr4zg to have phase Bound
Jul  5 18:49:17.513: INFO: PersistentVolume local-tr4zg found and phase=Bound (110.934566ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-sj2c
STEP: Creating a pod to test subpath
Jul  5 18:49:17.846: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-sj2c" in namespace "provisioning-6181" to be "Succeeded or Failed"
Jul  5 18:49:17.956: INFO: Pod "pod-subpath-test-preprovisionedpv-sj2c": Phase="Pending", Reason="", readiness=false. Elapsed: 110.826075ms
Jul  5 18:49:20.068: INFO: Pod "pod-subpath-test-preprovisionedpv-sj2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222195038s
Jul  5 18:49:22.180: INFO: Pod "pod-subpath-test-preprovisionedpv-sj2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.334431049s
STEP: Saw pod success
Jul  5 18:49:22.180: INFO: Pod "pod-subpath-test-preprovisionedpv-sj2c" satisfied condition "Succeeded or Failed"
Jul  5 18:49:22.291: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-sj2c container test-container-volume-preprovisionedpv-sj2c: <nil>
STEP: delete the pod
Jul  5 18:49:22.518: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-sj2c to disappear
Jul  5 18:49:22.628: INFO: Pod pod-subpath-test-preprovisionedpv-sj2c no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-sj2c
Jul  5 18:49:22.629: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-sj2c" in namespace "provisioning-6181"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":3,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount"]}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:24.198: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
Jul  5 18:49:28.977: INFO: PersistentVolumeClaim pvc-q7854 found but phase is Pending instead of Bound.
Jul  5 18:49:31.087: INFO: PersistentVolumeClaim pvc-q7854 found and phase=Bound (6.436765506s)
Jul  5 18:49:31.087: INFO: Waiting up to 3m0s for PersistentVolume local-dxvf6 to have phase Bound
Jul  5 18:49:31.196: INFO: PersistentVolume local-dxvf6 found and phase=Bound (108.779699ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-xlgh
STEP: Creating a pod to test subpath
Jul  5 18:49:31.524: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-xlgh" in namespace "provisioning-4777" to be "Succeeded or Failed"
Jul  5 18:49:31.633: INFO: Pod "pod-subpath-test-preprovisionedpv-xlgh": Phase="Pending", Reason="", readiness=false. Elapsed: 108.97276ms
Jul  5 18:49:33.742: INFO: Pod "pod-subpath-test-preprovisionedpv-xlgh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21828562s
STEP: Saw pod success
Jul  5 18:49:33.742: INFO: Pod "pod-subpath-test-preprovisionedpv-xlgh" satisfied condition "Succeeded or Failed"
Jul  5 18:49:33.851: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-xlgh container test-container-subpath-preprovisionedpv-xlgh: <nil>
STEP: delete the pod
Jul  5 18:49:34.076: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-xlgh to disappear
Jul  5 18:49:34.186: INFO: Pod pod-subpath-test-preprovisionedpv-xlgh no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-xlgh
Jul  5 18:49:34.186: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-xlgh" in namespace "provisioning-4777"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":12,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:35.731: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 69 lines ...
• [SLOW TEST:19.655 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":1,"skipped":6,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:49:37.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5272" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":13,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:37.495: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 88 lines ...
• [SLOW TEST:73.109 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:319
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":10,"skipped":81,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-storage] PVC Protection
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72
Jul  5 18:44:45.064: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
Jul  5 18:44:45.286: INFO: Default storage class: "kops-csi-1-21"
Jul  5 18:44:45.286: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating a Pod that becomes Running and therefore is actively using the PVC
Jul  5 18:49:45.838: FAIL: While creating pod that uses the PVC or waiting for the Pod to become Running
Unexpected error:
    <*errors.errorString | 0xc003d58d20>: {
        s: "pod \"pvc-tester-nk8rs\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-nk8rs" is not Running: timed out waiting for the condition
occurred

... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "pvc-protection-9906".
STEP: Found 5 events.
Jul  5 18:49:45.949: INFO: At 2021-07-05 18:44:45 +0000 UTC - event for pvc-protectionh49jf: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 18:49:45.949: INFO: At 2021-07-05 18:44:45 +0000 UTC - event for pvc-protectionh49jf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "pvc-protection-9906/pvc-protectionh49jf"
Jul  5 18:49:45.949: INFO: At 2021-07-05 18:44:45 +0000 UTC - event for pvc-protectionh49jf: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 18:49:45.949: INFO: At 2021-07-05 18:44:55 +0000 UTC - event for pvc-protectionh49jf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "kops-csi-1-21": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 18:49:45.949: INFO: At 2021-07-05 18:45:06 +0000 UTC - event for pvc-protectionh49jf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "kops-csi-1-21": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 18:49:46.058: INFO: POD               NODE  PHASE    GRACE  CONDITIONS
Jul  5 18:49:46.058: INFO: pvc-tester-nk8rs        Pending         []
Jul  5 18:49:46.058: INFO: 
Jul  5 18:49:46.168: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 18:49:46.277: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 6905 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kubelet Update v1 2021-07-05 18:45:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 18:46:00 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 
k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 
k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-04eb7308128e98652],VolumesAttached:[]AttachedVolume{},Config:nil,},}
... skipping 200 lines ...
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that PVC in active use by a pod is not removed immediately [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:126

  Jul  5 18:49:45.839: While creating pod that uses the PVC or waiting for the Pod to become Running
  Unexpected error:
      <*errors.errorString | 0xc003d58d20>: {
          s: "pod \"pvc-tester-nk8rs\" is not Running: timed out waiting for the condition",
      }
      pod "pvc-tester-nk8rs" is not Running: timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:96
------------------------------
{"msg":"FAILED [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","total":-1,"completed":2,"skipped":16,"failed":1,"failures":["[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":7,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:50.878: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 135 lines ...
STEP: Destroying namespace "services-1026" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":11,"skipped":88,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:51.310: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 48 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:49:52.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-821" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":12,"skipped":94,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:52.907: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-a5672f7e-138d-4e00-925b-67099ebed191
STEP: Creating a pod to test consume secrets
Jul  5 18:49:51.680: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1c520a15-f51d-402e-96cd-0e8708947927" in namespace "projected-2369" to be "Succeeded or Failed"
Jul  5 18:49:51.790: INFO: Pod "pod-projected-secrets-1c520a15-f51d-402e-96cd-0e8708947927": Phase="Pending", Reason="", readiness=false. Elapsed: 109.640128ms
Jul  5 18:49:53.901: INFO: Pod "pod-projected-secrets-1c520a15-f51d-402e-96cd-0e8708947927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.22065802s
STEP: Saw pod success
Jul  5 18:49:53.901: INFO: Pod "pod-projected-secrets-1c520a15-f51d-402e-96cd-0e8708947927" satisfied condition "Succeeded or Failed"
Jul  5 18:49:54.011: INFO: Trying to get logs from node ip-172-20-47-191.eu-central-1.compute.internal pod pod-projected-secrets-1c520a15-f51d-402e-96cd-0e8708947927 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  5 18:49:54.240: INFO: Waiting for pod pod-projected-secrets-1c520a15-f51d-402e-96cd-0e8708947927 to disappear
Jul  5 18:49:54.350: INFO: Pod pod-projected-secrets-1c520a15-f51d-402e-96cd-0e8708947927 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:49:54.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2369" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:49:54.617: INFO: Only supported for providers [azure] (not aws)
... skipping 102 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [azure] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1567
------------------------------
... skipping 16 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:50:03.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7204" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":4,"skipped":33,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0705 18:45:04.893229   12619 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0705 18:45:04.893304   12619 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  5 18:50:05.111: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:50:05.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6077" for this suite.


• [SLOW TEST:302.097 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:50:05.351: INFO: Only supported for providers [gce gke] (not aws)
... skipping 51 lines ...
STEP: creating replication controller externalname-service in namespace services-3367
I0705 18:47:40.776127   12471 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3367, replica count: 2
I0705 18:47:43.927572   12471 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  5 18:47:43.927: INFO: Creating new exec pod
Jul  5 18:47:47.366: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:47:53.514: INFO: rc: 1
Jul  5 18:47:53.514: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:47:54.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:00.664: INFO: rc: 1
Jul  5 18:48:00.664: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:01.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:07.692: INFO: rc: 1
Jul  5 18:48:07.692: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:08.514: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:14.668: INFO: rc: 1
Jul  5 18:48:14.669: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ + echo hostNamenc
 -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:15.514: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:21.721: INFO: rc: 1
Jul  5 18:48:21.724: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:22.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:28.736: INFO: rc: 1
Jul  5 18:48:28.736: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:29.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:35.754: INFO: rc: 1
Jul  5 18:48:35.754: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:36.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:42.744: INFO: rc: 1
Jul  5 18:48:42.744: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:43.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:49.723: INFO: rc: 1
Jul  5 18:48:49.723: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:50.518: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:48:56.676: INFO: rc: 1
Jul  5 18:48:56.676: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:48:57.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:03.681: INFO: rc: 1
Jul  5 18:49:03.681: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:04.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:10.716: INFO: rc: 1
Jul  5 18:49:10.716: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:11.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:17.723: INFO: rc: 1
Jul  5 18:49:17.723: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:18.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:24.658: INFO: rc: 1
Jul  5 18:49:24.658: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:25.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:31.699: INFO: rc: 1
Jul  5 18:49:31.700: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:32.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:38.665: INFO: rc: 1
Jul  5 18:49:38.665: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:39.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:45.740: INFO: rc: 1
Jul  5 18:49:45.740: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:46.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:52.709: INFO: rc: 1
Jul  5 18:49:52.709: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:53.515: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:49:59.711: INFO: rc: 1
Jul  5 18:49:59.711: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:49:59.711: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jul  5 18:50:05.858: INFO: rc: 1
Jul  5 18:50:05.858: INFO: Service reachability failing with error: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-3367 exec execpodc8qqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  5 18:50:05.858: FAIL: Unexpected error:
    <*errors.errorString | 0xc001bdc210>: {
        s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
occurred

... skipping 244 lines ...
• Failure [150.517 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 18:50:05.858: Unexpected error:
      <*errors.errorString | 0xc001bdc210>: {
          s: "service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint externalname-service:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1369
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":9,"skipped":118,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
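The failed test above repeats the same `kubectl exec ... nc` probe every few seconds until a 2m0s budget expires, logging `Retrying...` after each failure. A minimal sketch of that bounded-retry shape (assumptions: `probe` and `retry_until` are hypothetical stand-ins, not the e2e framework's code; `probe` replaces the real `kubectl exec ... nc -v -t -w 2 externalname-service 80`, which needs a live cluster, and the budget here is shortened to seconds):

```shell
# Bounded retry of a reachability probe, shaped like the e2e loop in this log.
# probe is a stand-in for the real kubectl exec ... nc command; here it always
# fails, just as every nc attempt in this log did ("nc: getaddrinfo: Try again").
probe() { false; }

# retry_until <seconds>: keep probing until success or the time budget runs out.
retry_until() {
  deadline=$(( $(date +%s) + $1 ))
  until probe; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "service is not reachable within ${1}s timeout"
      return 1
    fi
    echo "Retrying..."
    sleep 1
  done
  echo "reachable"
}

retry_until 3 || true   # with a failing probe: several "Retrying..." lines, then the timeout message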

S
------------------------------
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
... skipping 121 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support two pods which share the same volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:183
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume","total":-1,"completed":8,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 110 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":5,"skipped":35,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 18:50:21.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-474" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":6,"skipped":37,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 18:50:21.773: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 76162 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:13:59.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5549" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":32,"skipped":308,"failed":4,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:15.893 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777
------------------------------
{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":13,"skipped":87,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:01.468: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 49 lines ...
STEP: Creating pause pod deployment
Jul  5 19:11:40.564: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:2, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109100, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109100, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109100, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109100, loc:(*time.Location)(0x9f895a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-7fbd6894b6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 19:11:42.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:2, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109100, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109100, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109102, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109100, loc:(*time.Location)(0x9f895a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"pause-pod-7fbd6894b6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 19:11:44.896: INFO: Waiting up to 2m0s to get response from 100.71.34.20:8080
Jul  5 19:11:44.896: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip'
Jul  5 19:12:16.080: INFO: rc: 28
Jul  5 19:12:16.080: INFO: got err: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Jul  5 19:12:18.080: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip'
Jul  5 19:12:49.225: INFO: rc: 28
Jul  5 19:12:49.225: INFO: got err: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Jul  5 19:12:51.226: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip'
Jul  5 19:13:22.374: INFO: rc: 28
Jul  5 19:13:22.374: INFO: got err: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Jul  5 19:13:24.375: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip'
Jul  5 19:13:55.560: INFO: rc: 28
Jul  5 19:13:55.560: INFO: got err: error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip
command terminated with exit code 28

error:
exit status 28, retry until timeout
Jul  5 19:13:57.562: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
        },
        Code: 28,
    }
    error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip:
    Command stdout:
    
    stderr:
    + curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip
    command terminated with exit code 28
    
    error:
    exit status 28
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execSourceIPTest(0x0, 0x0, 0x0, 0x0, 0xc003fd6ee0, 0x1a, 0xc000ca3cc8, 0x15, 0xc00418fec0, 0xd, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133 +0x4d9
... skipping 257 lines ...
• Failure [156.490 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should preserve source pod IP for traffic thru service cluster IP [LinuxOnly] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:921

  Jul  5 19:13:57.562: Unexpected error:
      <exec.CodeExitError>: {
          Err: {
              s: "error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip:\nCommand stdout:\n\nstderr:\n+ curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip\ncommand terminated with exit code 28\n\nerror:\nexit status 28",
          },
          Code: 28,
      }
      error running /tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-2018 exec pause-pod-7fbd6894b6-6xk2r -- /bin/sh -x -c curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip:
      Command stdout:
      
      stderr:
      + curl -q -s --connect-timeout 30 100.71.34.20:8080/clientip
      command terminated with exit code 28
      
      error:
      exit status 28
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/util.go:133
------------------------------
{"msg":"FAILED [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]","total":-1,"completed":31,"skipped":250,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
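In the failure above, every `curl -q -s --connect-timeout 30` attempt returned rc 28, which is curl's `CURLE_OPERATION_TIMEDOUT` (distinct from rc 7, "failed to connect"), so the probe was timing out rather than being refused. A small helper for reading logs like this one (`classify_curl_rc` is a hypothetical illustration, not part of the e2e suite):

```shell
# Map common curl exit codes to readable causes when triaging e2e logs.
# rc 28 = CURLE_OPERATION_TIMEDOUT (what this log shows); rc 7 = connect failure.
classify_curl_rc() {
  case "$1" in
    0)  echo "ok" ;;
    7)  echo "failed to connect" ;;
    28) echo "timed out (CURLE_OPERATION_TIMEDOUT)" ;;
    *)  echo "other failure (rc=$1)" ;;
  esac
}

classify_curl_rc 28   # prints: timed out (CURLE_OPERATION_TIMEDOUT)
```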
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:03.422: INFO: Only supported for providers [azure] (not aws)
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:91
STEP: Creating a pod to test downward API volume plugin
Jul  5 19:14:04.115: INFO: Waiting up to 5m0s for pod "metadata-volume-dc30d2d1-d687-454c-9bda-27e58ade8ff1" in namespace "projected-1532" to be "Succeeded or Failed"
Jul  5 19:14:04.224: INFO: Pod "metadata-volume-dc30d2d1-d687-454c-9bda-27e58ade8ff1": Phase="Pending", Reason="", readiness=false. Elapsed: 109.693644ms
Jul  5 19:14:06.335: INFO: Pod "metadata-volume-dc30d2d1-d687-454c-9bda-27e58ade8ff1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21999206s
STEP: Saw pod success
Jul  5 19:14:06.335: INFO: Pod "metadata-volume-dc30d2d1-d687-454c-9bda-27e58ade8ff1" satisfied condition "Succeeded or Failed"
Jul  5 19:14:06.446: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod metadata-volume-dc30d2d1-d687-454c-9bda-27e58ade8ff1 container client-container: <nil>
STEP: delete the pod
Jul  5 19:14:06.672: INFO: Waiting for pod metadata-volume-dc30d2d1-d687-454c-9bda-27e58ade8ff1 to disappear
Jul  5 19:14:06.782: INFO: Pod metadata-volume-dc30d2d1-d687-454c-9bda-27e58ade8ff1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:14:06.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1532" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":32,"skipped":255,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:07.016: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 258 lines ...
      Driver local doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSS
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":22,"skipped":139,"failed":5,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
[BeforeEach] [sig-storage] Ephemeralstorage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:13:35.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : secret
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret","total":-1,"completed":23,"skipped":139,"failed":5,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:10.365: INFO: Driver "local" does not provide raw block - skipping
... skipping 25 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 19:14:07.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2aa966a7-fb15-41b7-b72d-774254030ef1" in namespace "downward-api-4662" to be "Succeeded or Failed"
Jul  5 19:14:08.018: INFO: Pod "downwardapi-volume-2aa966a7-fb15-41b7-b72d-774254030ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 109.176738ms
Jul  5 19:14:10.130: INFO: Pod "downwardapi-volume-2aa966a7-fb15-41b7-b72d-774254030ef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.220755341s
STEP: Saw pod success
Jul  5 19:14:10.130: INFO: Pod "downwardapi-volume-2aa966a7-fb15-41b7-b72d-774254030ef1" satisfied condition "Succeeded or Failed"
Jul  5 19:14:10.241: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod downwardapi-volume-2aa966a7-fb15-41b7-b72d-774254030ef1 container client-container: <nil>
STEP: delete the pod
Jul  5 19:14:10.469: INFO: Waiting for pod downwardapi-volume-2aa966a7-fb15-41b7-b72d-774254030ef1 to disappear
Jul  5 19:14:10.579: INFO: Pod downwardapi-volume-2aa966a7-fb15-41b7-b72d-774254030ef1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:14:10.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4662" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":296,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:10.809: INFO: Driver csi-hostpath doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 30 lines ...
STEP: Destroying namespace "apply-6542" for this suite.
[AfterEach] [sig-api-machinery] ServerSideApply
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/apply.go:56

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request","total":-1,"completed":34,"skipped":300,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:12.410: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 42 lines ...
Jul  5 19:14:14.025: INFO: PersistentVolumeClaim pvc-47xtv found but phase is Pending instead of Bound.
Jul  5 19:14:16.135: INFO: PersistentVolumeClaim pvc-47xtv found and phase=Bound (10.686606758s)
Jul  5 19:14:16.135: INFO: Waiting up to 3m0s for PersistentVolume local-w5rjk to have phase Bound
Jul  5 19:14:16.245: INFO: PersistentVolume local-w5rjk found and phase=Bound (109.533462ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-s7c7
STEP: Creating a pod to test subpath
Jul  5 19:14:16.574: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-s7c7" in namespace "provisioning-1807" to be "Succeeded or Failed"
Jul  5 19:14:16.684: INFO: Pod "pod-subpath-test-preprovisionedpv-s7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 109.383763ms
Jul  5 19:14:18.794: INFO: Pod "pod-subpath-test-preprovisionedpv-s7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220122483s
Jul  5 19:14:20.905: INFO: Pod "pod-subpath-test-preprovisionedpv-s7c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.330891099s
STEP: Saw pod success
Jul  5 19:14:20.905: INFO: Pod "pod-subpath-test-preprovisionedpv-s7c7" satisfied condition "Succeeded or Failed"
Jul  5 19:14:21.015: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-s7c7 container test-container-subpath-preprovisionedpv-s7c7: <nil>
STEP: delete the pod
Jul  5 19:14:21.238: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-s7c7 to disappear
Jul  5 19:14:21.348: INFO: Pod pod-subpath-test-preprovisionedpv-s7c7 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-s7c7
Jul  5 19:14:21.348: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-s7c7" in namespace "provisioning-1807"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":14,"skipped":93,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:22.910: INFO: Only supported for providers [openstack] (not aws)
... skipping 26 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver hostPath doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
... skipping 51 lines ...
Jul  5 19:14:22.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  5 19:14:23.641: INFO: Waiting up to 5m0s for pod "pod-c05374b9-c843-4ffc-a1ea-5d0244e1da1c" in namespace "emptydir-5718" to be "Succeeded or Failed"
Jul  5 19:14:23.750: INFO: Pod "pod-c05374b9-c843-4ffc-a1ea-5d0244e1da1c": Phase="Pending", Reason="", readiness=false. Elapsed: 109.400976ms
Jul  5 19:14:25.859: INFO: Pod "pod-c05374b9-c843-4ffc-a1ea-5d0244e1da1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218739638s
STEP: Saw pod success
Jul  5 19:14:25.859: INFO: Pod "pod-c05374b9-c843-4ffc-a1ea-5d0244e1da1c" satisfied condition "Succeeded or Failed"
Jul  5 19:14:25.969: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-c05374b9-c843-4ffc-a1ea-5d0244e1da1c container test-container: <nil>
STEP: delete the pod
Jul  5 19:14:26.197: INFO: Waiting for pod pod-c05374b9-c843-4ffc-a1ea-5d0244e1da1c to disappear
Jul  5 19:14:26.306: INFO: Pod pod-c05374b9-c843-4ffc-a1ea-5d0244e1da1c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:14:26.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5718" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":109,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:26.549: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 82 lines ...
Jul  5 19:14:30.656: INFO: PersistentVolumeClaim pvc-hxvhw found but phase is Pending instead of Bound.
Jul  5 19:14:32.767: INFO: PersistentVolumeClaim pvc-hxvhw found and phase=Bound (2.219955591s)
Jul  5 19:14:32.767: INFO: Waiting up to 3m0s for PersistentVolume local-kpvs8 to have phase Bound
Jul  5 19:14:32.877: INFO: PersistentVolume local-kpvs8 found and phase=Bound (110.253159ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-grf2
STEP: Creating a pod to test subpath
Jul  5 19:14:33.208: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-grf2" in namespace "provisioning-6666" to be "Succeeded or Failed"
Jul  5 19:14:33.318: INFO: Pod "pod-subpath-test-preprovisionedpv-grf2": Phase="Pending", Reason="", readiness=false. Elapsed: 109.704341ms
Jul  5 19:14:35.428: INFO: Pod "pod-subpath-test-preprovisionedpv-grf2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219592388s
Jul  5 19:14:37.538: INFO: Pod "pod-subpath-test-preprovisionedpv-grf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329987606s
STEP: Saw pod success
Jul  5 19:14:37.538: INFO: Pod "pod-subpath-test-preprovisionedpv-grf2" satisfied condition "Succeeded or Failed"
Jul  5 19:14:37.648: INFO: Trying to get logs from node ip-172-20-47-191.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-grf2 container test-container-volume-preprovisionedpv-grf2: <nil>
STEP: delete the pod
Jul  5 19:14:37.874: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-grf2 to disappear
Jul  5 19:14:37.983: INFO: Pod pod-subpath-test-preprovisionedpv-grf2 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-grf2
Jul  5 19:14:37.983: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-grf2" in namespace "provisioning-6666"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing directory
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:205
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory","total":-1,"completed":16,"skipped":118,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:39.548: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 18 lines ...
Jul  5 19:14:39.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  5 19:14:41.548: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:14:41.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9279" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":119,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:42.048: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 109 lines ...
• [SLOW TEST:32.708 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":35,"skipped":319,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:45.201: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 14 lines ...
      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SSSSSSSS
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","total":-1,"completed":13,"skipped":164,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents"]}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:13:40.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":14,"skipped":164,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:45.349: INFO: Only supported for providers [gce gke] (not aws)
... skipping 88 lines ...
• [SLOW TEST:37.424 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/resource_quota.go:583
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":24,"skipped":147,"failed":5,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:47.821: INFO: Only supported for providers [vsphere] (not aws)
... skipping 22 lines ...
STEP: Creating a kubernetes client
Jul  5 19:14:45.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Jul  5 19:14:46.007: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:14:49.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1695" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":15,"skipped":176,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents"]}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:49.335: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 133 lines ...
Jul  5 19:14:45.559: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  5 19:14:45.559: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7601 describe pod agnhost-primary-f4cnz'
Jul  5 19:14:46.258: INFO: stderr: ""
Jul  5 19:14:46.258: INFO: stdout: "Name:         agnhost-primary-f4cnz\nNamespace:    kubectl-7601\nPriority:     0\nNode:         ip-172-20-60-158.eu-central-1.compute.internal/172.20.60.158\nStart Time:   Mon, 05 Jul 2021 19:14:43 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           100.96.3.61\nIPs:\n  IP:           100.96.3.61\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://fda05c7c47777c166790682333d8c20679b3a0d5c0af7ceba0db55210348f928\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 05 Jul 2021 19:14:44 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7s524 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-7s524:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned kubectl-7601/agnhost-primary-f4cnz to ip-172-20-60-158.eu-central-1.compute.internal\n  Normal  Pulled     2s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" already present on machine\n  Normal  Created    2s    kubelet            Created container agnhost-primary\n  Normal  Started    2s    kubelet            Started container agnhost-primary\n"
Jul  5 19:14:46.258: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7601 describe rc agnhost-primary'
Jul  5 19:14:46.997: INFO: stderr: ""
Jul  5 19:14:46.997: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-7601\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.32\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-primary-f4cnz\n"
Jul  5 19:14:46.997: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7601 describe service agnhost-primary'
Jul  5 19:14:47.729: INFO: stderr: ""
Jul  5 19:14:47.729: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-7601\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                100.64.170.69\nIPs:               100.64.170.69\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         100.96.3.61:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jul  5 19:14:47.842: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7601 describe node ip-172-20-36-144.eu-central-1.compute.internal'
Jul  5 19:14:48.945: INFO: stderr: ""
Jul  5 19:14:48.946: INFO: stdout: "Name:               ip-172-20-36-144.eu-central-1.compute.internal\nRoles:              node\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=t3.medium\n                    beta.kubernetes.io/os=linux\n                    failure-domain.beta.kubernetes.io/region=eu-central-1\n                    failure-domain.beta.kubernetes.io/zone=eu-central-1a\n                    kops.k8s.io/instancegroup=nodes-eu-central-1a\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=ip-172-20-36-144.eu-central-1.compute.internal\n                    kubernetes.io/os=linux\n                    kubernetes.io/role=node\n                    node-role.kubernetes.io/node=\n                    node.kubernetes.io/instance-type=t3.medium\n                    topology.ebs.csi.aws.com/zone=eu-central-1a\n                    topology.hostpath.csi/node=ip-172-20-36-144.eu-central-1.compute.internal\n                    topology.kubernetes.io/region=eu-central-1\n                    topology.kubernetes.io/zone=eu-central-1a\nAnnotations:        csi.volume.kubernetes.io/nodeid: {\"ebs.csi.aws.com\":\"i-067925adb76b0251a\"}\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 05 Jul 2021 18:40:00 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  ip-172-20-36-144.eu-central-1.compute.internal\n  AcquireTime:     <unset>\n  RenewTime:       Mon, 05 Jul 2021 19:14:43 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 05 Jul 2021 19:11:34 +0000   Mon, 05 Jul 2021 18:40:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 05 Jul 2021 19:11:34 +0000   Mon, 05 Jul 2021 18:40:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 05 Jul 2021 19:11:34 +0000   Mon, 05 Jul 2021 18:40:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 05 Jul 2021 19:11:34 +0000   Mon, 05 Jul 2021 18:40:01 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   172.20.36.144\n  ExternalIP:   18.192.124.200\n  InternalDNS:  ip-172-20-36-144.eu-central-1.compute.internal\n  Hostname:     ip-172-20-36-144.eu-central-1.compute.internal\n  ExternalDNS:  ec2-18-192-124-200.eu-central-1.compute.amazonaws.com\nCapacity:\n  cpu:                2\n  ephemeral-storage:  48725632Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3968640Ki\n  pods:               110\nAllocatable:\n  cpu:                2\n  ephemeral-storage:  44905542377\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             3866240Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 ec26a0e9b6686625a154e53f3338c245\n  System UUID:                ec26a0e9-b668-6625-a154-e53f3338c245\n  Boot ID:                    71bca41e-06e9-4785-8601-cc91d7f94f33\n  Kernel Version:             5.8.0-1038-aws\n  OS Image:                   Ubuntu 20.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.6\n  Kubelet Version:            v1.22.0-beta.0\n  Kube-Proxy Version:         v1.22.0-beta.0\nPodCIDR:                      100.96.2.0/24\nPodCIDRs:                     100.96.2.0/24\nProviderID:                   aws:///eu-central-1a/i-067925adb76b0251a\nNon-terminated Pods:          (13 in total)\n  Namespace                   Name                                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                             ------------  ----------  ---------------  -------------  ---\n  deployment-7199             test-orphan-deployment-847dcfb7fb-5qlbc                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s\n  dns-4531                    e2e-configmap-dns-server-ca5435cf-53df-4b36-9818-62c85af89544    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s\n  init-container-1695         pod-init-60cf8ae7-dc88-4225-aab6-fce8fa74b226                    100m (5%)     100m (5%)   0 (0%)           0 (0%)         2s\n  kube-system                 ebs-csi-node-q6s54                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         34m\n  kube-system                 kube-proxy-ip-172-20-36-144.eu-central-1.compute.internal        100m (5%)     0 (0%)      0 (0%)           0 (0%)         33m\n  kube-system                 metrics-proxy                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m\n  pod-network-test-4172       netserver-0                                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s\n  pod-network-test-4172       test-container-pod                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s\n  pv-3837                     pvc-tester-rvwrm                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s\n  pv-5435                     nfs-server                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s\n  services-7590               service-headless-r9tng                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s\n  services-7590               service-headless-toggled-dxmz5                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s\n  services-7590               verify-service-up-exec-pod-74rmb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                200m (10%)  100m (5%)\n  memory             0 (0%)      0 (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                   Age                From        Message\n  ----     ------                   ----               ----        -------\n  Normal   Starting                 35m                kubelet     Starting kubelet.\n  Warning  InvalidDiskCapacity      35m                kubelet     invalid capacity 0 on image filesystem\n  Normal   NodeAllocatableEnforced  35m                kubelet     Updated Node Allocatable limit across pods\n  Normal   NodeHasSufficientMemory  34m (x7 over 35m)  kubelet     Node ip-172-20-36-144.eu-central-1.compute.internal status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure    34m (x7 over 35m)  kubelet     Node ip-172-20-36-144.eu-central-1.compute.internal status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID     34m (x7 over 35m)  kubelet     Node ip-172-20-36-144.eu-central-1.compute.internal status is now: NodeHasSufficientPID\n  Normal   NodeReady                34m                kubelet     Node ip-172-20-36-144.eu-central-1.compute.internal status is now: NodeReady\n  Normal   Starting                 34m                kube-proxy  Starting kube-proxy.\n"
... skipping 29 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:14:50.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2607" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":-1,"completed":18,"skipped":149,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:14:49.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 49 lines ...
• [SLOW TEST:6.350 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  test Deployment ReplicaSet orphaning and adoption regarding controllerRef
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136
------------------------------
{"msg":"PASSED [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef","total":-1,"completed":36,"skipped":329,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:51.605: INFO: Driver aws doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 54 lines ...
• [SLOW TEST:61.077 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":183,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:53.092: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 9 lines ...
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Driver csi-hostpath doesn't support InlineVolume -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":19,"skipped":149,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:14:50.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-a231110e-2088-4335-b090-840c1387699c
STEP: Creating a pod to test consume configMaps
Jul  5 19:14:51.559: INFO: Waiting up to 5m0s for pod "pod-configmaps-c80c7d23-d7d9-4d10-b508-d070eef056e0" in namespace "configmap-6984" to be "Succeeded or Failed"
Jul  5 19:14:51.669: INFO: Pod "pod-configmaps-c80c7d23-d7d9-4d10-b508-d070eef056e0": Phase="Pending", Reason="", readiness=false. Elapsed: 109.52682ms
Jul  5 19:14:53.778: INFO: Pod "pod-configmaps-c80c7d23-d7d9-4d10-b508-d070eef056e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219041247s
STEP: Saw pod success
Jul  5 19:14:53.778: INFO: Pod "pod-configmaps-c80c7d23-d7d9-4d10-b508-d070eef056e0" satisfied condition "Succeeded or Failed"
Jul  5 19:14:53.888: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-configmaps-c80c7d23-d7d9-4d10-b508-d070eef056e0 container agnhost-container: <nil>
STEP: delete the pod
Jul  5 19:14:54.138: INFO: Waiting for pod pod-configmaps-c80c7d23-d7d9-4d10-b508-d070eef056e0 to disappear
Jul  5 19:14:54.247: INFO: Pod pod-configmaps-c80c7d23-d7d9-4d10-b508-d070eef056e0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:14:54.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6984" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":149,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SSSSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:14:53.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Jul  5 19:14:53.764: INFO: Waiting up to 5m0s for pod "client-containers-ac71faf6-11c9-407e-a1a4-9e4d9ab29bbf" in namespace "containers-4482" to be "Succeeded or Failed"
Jul  5 19:14:53.873: INFO: Pod "client-containers-ac71faf6-11c9-407e-a1a4-9e4d9ab29bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 108.724119ms
Jul  5 19:14:55.982: INFO: Pod "client-containers-ac71faf6-11c9-407e-a1a4-9e4d9ab29bbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218160082s
STEP: Saw pod success
Jul  5 19:14:55.983: INFO: Pod "client-containers-ac71faf6-11c9-407e-a1a4-9e4d9ab29bbf" satisfied condition "Succeeded or Failed"
Jul  5 19:14:56.091: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod client-containers-ac71faf6-11c9-407e-a1a4-9e4d9ab29bbf container agnhost-container: <nil>
STEP: delete the pod
Jul  5 19:14:56.315: INFO: Waiting for pod client-containers-ac71faf6-11c9-407e-a1a4-9e4d9ab29bbf to disappear
Jul  5 19:14:56.424: INFO: Pod client-containers-ac71faf6-11c9-407e-a1a4-9e4d9ab29bbf no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:14:56.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4482" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":184,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:14:56.670: INFO: Only supported for providers [gce gke] (not aws)
... skipping 63 lines ...
Jul  5 19:15:00.379: INFO: PersistentVolumeClaim pvc-q9k8q found but phase is Pending instead of Bound.
Jul  5 19:15:02.489: INFO: PersistentVolumeClaim pvc-q9k8q found and phase=Bound (10.658212768s)
Jul  5 19:15:02.489: INFO: Waiting up to 3m0s for PersistentVolume local-2wk27 to have phase Bound
Jul  5 19:15:02.598: INFO: PersistentVolume local-2wk27 found and phase=Bound (109.288605ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-gg9l
STEP: Creating a pod to test subpath
Jul  5 19:15:02.928: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-gg9l" in namespace "provisioning-976" to be "Succeeded or Failed"
Jul  5 19:15:03.037: INFO: Pod "pod-subpath-test-preprovisionedpv-gg9l": Phase="Pending", Reason="", readiness=false. Elapsed: 109.320004ms
Jul  5 19:15:05.148: INFO: Pod "pod-subpath-test-preprovisionedpv-gg9l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219796523s
Jul  5 19:15:07.259: INFO: Pod "pod-subpath-test-preprovisionedpv-gg9l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.330568587s
STEP: Saw pod success
Jul  5 19:15:07.259: INFO: Pod "pod-subpath-test-preprovisionedpv-gg9l" satisfied condition "Succeeded or Failed"
Jul  5 19:15:07.368: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-gg9l container test-container-subpath-preprovisionedpv-gg9l: <nil>
STEP: delete the pod
Jul  5 19:15:07.596: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-gg9l to disappear
Jul  5 19:15:07.705: INFO: Pod pod-subpath-test-preprovisionedpv-gg9l no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-gg9l
Jul  5 19:15:07.705: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-gg9l" in namespace "provisioning-976"
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":25,"skipped":157,"failed":5,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:09.246: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 44 lines ...
Jul  5 19:14:59.521: INFO: PersistentVolumeClaim pvc-nvm6r found but phase is Pending instead of Bound.
Jul  5 19:15:01.631: INFO: PersistentVolumeClaim pvc-nvm6r found and phase=Bound (4.330323206s)
Jul  5 19:15:01.631: INFO: Waiting up to 3m0s for PersistentVolume local-kn9k8 to have phase Bound
Jul  5 19:15:01.740: INFO: PersistentVolume local-kn9k8 found and phase=Bound (109.314282ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-dlpf
STEP: Creating a pod to test subpath
Jul  5 19:15:02.070: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-dlpf" in namespace "provisioning-968" to be "Succeeded or Failed"
Jul  5 19:15:02.180: INFO: Pod "pod-subpath-test-preprovisionedpv-dlpf": Phase="Pending", Reason="", readiness=false. Elapsed: 109.448295ms
Jul  5 19:15:04.290: INFO: Pod "pod-subpath-test-preprovisionedpv-dlpf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219400027s
Jul  5 19:15:06.399: INFO: Pod "pod-subpath-test-preprovisionedpv-dlpf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.329055344s
STEP: Saw pod success
Jul  5 19:15:06.399: INFO: Pod "pod-subpath-test-preprovisionedpv-dlpf" satisfied condition "Succeeded or Failed"
Jul  5 19:15:06.509: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-dlpf container test-container-subpath-preprovisionedpv-dlpf: <nil>
STEP: delete the pod
Jul  5 19:15:06.738: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-dlpf to disappear
Jul  5 19:15:06.847: INFO: Pod pod-subpath-test-preprovisionedpv-dlpf no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-dlpf
Jul  5 19:15:06.849: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-dlpf" in namespace "provisioning-968"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":37,"skipped":334,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support non-existent path
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194
Jul  5 19:15:09.829: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  5 19:15:09.829: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-6fcs
STEP: Creating a pod to test subpath
Jul  5 19:15:09.941: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-6fcs" in namespace "provisioning-8272" to be "Succeeded or Failed"
Jul  5 19:15:10.050: INFO: Pod "pod-subpath-test-inlinevolume-6fcs": Phase="Pending", Reason="", readiness=false. Elapsed: 109.222115ms
Jul  5 19:15:12.160: INFO: Pod "pod-subpath-test-inlinevolume-6fcs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219084647s
STEP: Saw pod success
Jul  5 19:15:12.160: INFO: Pod "pod-subpath-test-inlinevolume-6fcs" satisfied condition "Succeeded or Failed"
Jul  5 19:15:12.270: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-6fcs container test-container-volume-inlinevolume-6fcs: <nil>
STEP: delete the pod
Jul  5 19:15:12.557: INFO: Waiting for pod pod-subpath-test-inlinevolume-6fcs to disappear
Jul  5 19:15:12.666: INFO: Pod pod-subpath-test-inlinevolume-6fcs no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-6fcs
Jul  5 19:15:12.666: INFO: Deleting pod "pod-subpath-test-inlinevolume-6fcs" in namespace "provisioning-8272"
... skipping 3 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:15:12.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-8272" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":26,"skipped":166,"failed":5,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:13.115: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 142 lines ...
Jul  5 19:09:46.287: INFO: PersistentVolumeClaim pvc-qftff found and phase=Bound (14.880596602s)
Jul  5 19:09:46.287: INFO: Waiting up to 3m0s for PersistentVolume nfs-l5q62 to have phase Bound
Jul  5 19:09:46.398: INFO: PersistentVolume nfs-l5q62 found and phase=Bound (111.859988ms)
STEP: Checking pod has write access to PersistentVolume
Jul  5 19:09:46.616: INFO: Creating nfs test pod
Jul  5 19:09:46.730: INFO: Pod should terminate with exitcode 0 (success)
Jul  5 19:09:46.730: INFO: Waiting up to 5m0s for pod "pvc-tester-rvwrm" in namespace "pv-3837" to be "Succeeded or Failed"
Jul  5 19:09:46.839: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 109.12486ms
Jul  5 19:09:48.951: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221287431s
Jul  5 19:09:51.062: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331831061s
Jul  5 19:09:53.172: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442381438s
Jul  5 19:09:55.283: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552897031s
Jul  5 19:09:57.394: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663447721s
... skipping 133 lines ...
Jul  5 19:14:40.194: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.463724345s
Jul  5 19:14:42.304: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.574298548s
Jul  5 19:14:44.415: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.684601643s
Jul  5 19:14:46.525: INFO: Pod "pvc-tester-rvwrm": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.795413174s
Jul  5 19:14:48.526: INFO: Deleting pod "pvc-tester-rvwrm" in namespace "pv-3837"
Jul  5 19:14:48.662: INFO: Wait up to 5m0s for pod "pvc-tester-rvwrm" to be fully deleted
Jul  5 19:15:00.882: FAIL: Unexpected error:
    <*errors.errorString | 0xc00306ebb0>: {
        s: "pod \"pvc-tester-rvwrm\" did not exit with Success: pod \"pvc-tester-rvwrm\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-rvwrm\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-rvwrm" did not exit with Success: pod "pvc-tester-rvwrm" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-rvwrm" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.completeTest(0xc003656000, 0x78a18a8, 0xc003f2c000, 0xc00266d679, 0x7, 0xc002d8d180, 0xc0019de8c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52 +0x19c
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.3.5()
... skipping 22 lines ...
Jul  5 19:15:11.548: INFO: At 2021-07-05 19:09:28 +0000 UTC - event for nfs-server: {default-scheduler } Scheduled: Successfully assigned pv-3837/nfs-server to ip-172-20-59-37.eu-central-1.compute.internal
Jul  5 19:15:11.548: INFO: At 2021-07-05 19:09:28 +0000 UTC - event for nfs-server: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Jul  5 19:15:11.548: INFO: At 2021-07-05 19:09:28 +0000 UTC - event for nfs-server: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} Created: Created container nfs-server
Jul  5 19:15:11.548: INFO: At 2021-07-05 19:09:28 +0000 UTC - event for nfs-server: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} Started: Started container nfs-server
Jul  5 19:15:11.548: INFO: At 2021-07-05 19:09:46 +0000 UTC - event for pvc-tester-rvwrm: {default-scheduler } Scheduled: Successfully assigned pv-3837/pvc-tester-rvwrm to ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 19:15:11.548: INFO: At 2021-07-05 19:11:49 +0000 UTC - event for pvc-tester-rvwrm: {kubelet ip-172-20-36-144.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-qzf9k]: timed out waiting for the condition
Jul  5 19:15:11.548: INFO: At 2021-07-05 19:12:47 +0000 UTC - event for pvc-tester-rvwrm: {kubelet ip-172-20-36-144.eu-central-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-l5q62" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.4.246:/exports /var/lib/kubelet/pods/31ebd047-0789-4db7-b421-fe0c0ee6c135/volumes/kubernetes.io~nfs/nfs-l5q62
Output: mount.nfs: Connection timed out

Jul  5 19:15:11.548: INFO: At 2021-07-05 19:15:01 +0000 UTC - event for nfs-server: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} Killing: Stopping container nfs-server
Jul  5 19:15:11.657: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 202 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:196

      Jul  5 19:15:00.882: Unexpected error:
          <*errors.errorString | 0xc00306ebb0>: {
              s: "pod \"pvc-tester-rvwrm\" did not exit with Success: pod \"pvc-tester-rvwrm\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-rvwrm\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-rvwrm" did not exit with Success: pod "pvc-tester-rvwrm" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-rvwrm" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":25,"skipped":165,"failed":4,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume","[sig-network] DNS should provide DNS for services  [Conformance]","[sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
... skipping 7 lines ...
Jul  5 19:10:10.558: INFO: Creating resource for dynamic PV
Jul  5 19:10:10.558: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass fsgroupchangepolicy-1865rdjzt
STEP: creating a claim
Jul  5 19:10:10.667: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating Pod in namespace fsgroupchangepolicy-1865 with fsgroup 1000
Jul  5 19:15:11.335: FAIL: Unexpected error:
    <*errors.errorString | 0xc003bcb110>: {
        s: "pod \"pod-6ae14f22-88c2-4769-beae-a112fdfaad08\" is not Running: timed out waiting for the condition",
    }
    pod "pod-6ae14f22-88c2-4769-beae-a112fdfaad08" is not Running: timed out waiting for the condition
occurred

... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "fsgroupchangepolicy-1865".
STEP: Found 5 events.
Jul  5 19:15:11.775: INFO: At 2021-07-05 19:10:10 +0000 UTC - event for awsrvvlv: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:15:11.775: INFO: At 2021-07-05 19:10:10 +0000 UTC - event for awsrvvlv: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "fsgroupchangepolicy-1865/awsrvvlv"
Jul  5 19:15:11.775: INFO: At 2021-07-05 19:10:10 +0000 UTC - event for awsrvvlv: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:15:11.775: INFO: At 2021-07-05 19:10:20 +0000 UTC - event for awsrvvlv: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-1865rdjzt": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:15:11.775: INFO: At 2021-07-05 19:10:30 +0000 UTC - event for awsrvvlv: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-1865rdjzt": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:15:11.884: INFO: POD                                       NODE  PHASE    GRACE  CONDITIONS
Jul  5 19:15:11.884: INFO: pod-6ae14f22-88c2-4769-beae-a112fdfaad08        Pending         []
Jul  5 19:15:11.884: INFO: 
Jul  5 19:15:11.993: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
... skipping 199 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Jul  5 19:15:11.335: Unexpected error:
          <*errors.errorString | 0xc003bcb110>: {
              s: "pod \"pod-6ae14f22-88c2-4769-beae-a112fdfaad08\" is not Running: timed out waiting for the condition",
          }
          pod "pod-6ae14f22-88c2-4769-beae-a112fdfaad08" is not Running: timed out waiting for the condition
      occurred

... skipping 9 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jul  5 19:12:23.308: INFO: Creating ReplicaSet my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4
Jul  5 19:12:23.529: INFO: Pod name my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4: Found 1 pods out of 1
Jul  5 19:12:23.529: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4" is running
Jul  5 19:12:25.749: INFO: Pod "my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-05 19:12:23 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-05 19:12:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-05 19:12:23 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-07-05 19:12:23 +0000 UTC Reason: Message:}])
Jul  5 19:12:25.749: INFO: Trying to dial the pod
Jul  5 19:13:01.081: INFO: Controller my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4: Failed to GET from replica 1 [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67]: the server is currently unable to handle the request (get pods my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.60.158", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc0036ffb78), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004bc3540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004bc0b3d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 19:13:36.080: INFO: Controller my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4: Failed to GET from replica 1 [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67]: the server is currently unable to handle the request (get pods my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.60.158", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc0036ffb78), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004bc3540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004bc0b3d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 19:14:11.080: INFO: Controller my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4: Failed to GET from replica 1 [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67]: the server is currently unable to handle the request (get pods my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.60.158", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc0036ffb78), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004bc3540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004bc0b3d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 19:14:46.094: INFO: Controller my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4: Failed to GET from replica 1 [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67]: the server is currently unable to handle the request (get pods my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.60.158", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc0036ffb78), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004bc3540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004bc0b3d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 19:15:16.430: INFO: Controller my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4: Failed to GET from replica 1 [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67]: the server is currently unable to handle the request (get pods my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4-zcp67)
pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109143, loc:(*time.Location)(0x9f895a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.20.60.158", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(0xc0036ffb78), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-7ff1059f-e3bc-4f7b-a87d-8d0f5ab7b6d4", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004bc3540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, 
RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.32", ImageID:"", ContainerID:"", Started:(*bool)(0xc004bc0b3d)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jul  5 19:15:16.430: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func8.1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110 +0x57
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000922900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 229 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 19:15:16.430: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110
------------------------------
{"msg":"FAILED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":28,"skipped":275,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

S
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
STEP: Verifying customized DNS option is configured on pod...
Jul  5 19:11:45.075: INFO: ExecWithOptions {Command:[cat /etc/resolv.conf] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:11:45.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Verifying customized name server and search path are working...
Jul  5 19:11:45.844: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:11:45.844: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:12:01.653: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:12:06.654: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:12:06.654: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:12:22.410: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:12:26.653: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:12:26.653: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:12:42.426: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:12:46.656: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:12:46.656: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:13:02.417: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:13:06.653: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:13:06.653: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:13:22.395: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:13:26.653: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:13:26.653: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:13:42.386: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:13:46.655: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:13:46.655: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:14:02.406: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:14:06.653: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:14:06.654: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:14:22.395: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:14:26.653: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:14:26.653: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:14:42.423: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:14:46.653: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:14:46.654: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:15:02.387: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:15:02.387: INFO: ExecWithOptions {Command:[dig +short +search notexistname] Namespace:dns-4531 PodName:e2e-dns-utils ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jul  5 19:15:02.387: INFO: >>> kubeConfig: /root/.kube/config
Jul  5 19:15:18.139: INFO: ginkgo.Failed to execute dig command, stdout:;; connection timed out; no servers could be reached, stderr: , err: command terminated with exit code 9
Jul  5 19:15:18.140: FAIL: failed to verify customized name server and search path
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 236 lines ...
• Failure [229.221 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should support configurable pod resolv.conf [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:458

  Jul  5 19:15:18.140: failed to verify customized name server and search path
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:565
------------------------------
{"msg":"FAILED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":38,"skipped":329,"failed":3,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] DNS should support configurable pod resolv.conf"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:22.853: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] capacity
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 22 lines ...
Jul  5 19:15:14.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support existing single file [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
Jul  5 19:15:14.572: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  5 19:15:14.794: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1176" in namespace "provisioning-1176" to be "Succeeded or Failed"
Jul  5 19:15:14.904: INFO: Pod "hostpath-symlink-prep-provisioning-1176": Phase="Pending", Reason="", readiness=false. Elapsed: 109.366756ms
Jul  5 19:15:17.013: INFO: Pod "hostpath-symlink-prep-provisioning-1176": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.219307852s
STEP: Saw pod success
Jul  5 19:15:17.014: INFO: Pod "hostpath-symlink-prep-provisioning-1176" satisfied condition "Succeeded or Failed"
Jul  5 19:15:17.014: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1176" in namespace "provisioning-1176"
Jul  5 19:15:17.130: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1176" to be fully deleted
Jul  5 19:15:17.239: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-vwmj
STEP: Creating a pod to test subpath
Jul  5 19:15:17.350: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-vwmj" in namespace "provisioning-1176" to be "Succeeded or Failed"
Jul  5 19:15:17.459: INFO: Pod "pod-subpath-test-inlinevolume-vwmj": Phase="Pending", Reason="", readiness=false. Elapsed: 109.280928ms
Jul  5 19:15:19.569: INFO: Pod "pod-subpath-test-inlinevolume-vwmj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218972927s
STEP: Saw pod success
Jul  5 19:15:19.569: INFO: Pod "pod-subpath-test-inlinevolume-vwmj" satisfied condition "Succeeded or Failed"
Jul  5 19:15:19.678: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-inlinevolume-vwmj container test-container-subpath-inlinevolume-vwmj: <nil>
STEP: delete the pod
Jul  5 19:15:19.907: INFO: Waiting for pod pod-subpath-test-inlinevolume-vwmj to disappear
Jul  5 19:15:20.015: INFO: Pod pod-subpath-test-inlinevolume-vwmj no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-vwmj
Jul  5 19:15:20.015: INFO: Deleting pod "pod-subpath-test-inlinevolume-vwmj" in namespace "provisioning-1176"
STEP: Deleting pod
Jul  5 19:15:20.124: INFO: Deleting pod "pod-subpath-test-inlinevolume-vwmj" in namespace "provisioning-1176"
Jul  5 19:15:20.343: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-1176" in namespace "provisioning-1176" to be "Succeeded or Failed"
Jul  5 19:15:20.451: INFO: Pod "hostpath-symlink-prep-provisioning-1176": Phase="Pending", Reason="", readiness=false. Elapsed: 108.722459ms
Jul  5 19:15:22.561: INFO: Pod "hostpath-symlink-prep-provisioning-1176": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218382921s
STEP: Saw pod success
Jul  5 19:15:22.561: INFO: Pod "hostpath-symlink-prep-provisioning-1176" satisfied condition "Succeeded or Failed"
Jul  5 19:15:22.561: INFO: Deleting pod "hostpath-symlink-prep-provisioning-1176" in namespace "provisioning-1176"
Jul  5 19:15:22.675: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-1176" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:15:22.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-1176" for this suite.
... skipping 6 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":27,"skipped":192,"failed":5,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:23.016: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186

... skipping 37 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:15:23.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5594" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob","total":-1,"completed":29,"skipped":276,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

SSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:23.493: INFO: Only supported for providers [gce gke] (not aws)
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Only supported for providers [gce gke] (not aws)

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1302
------------------------------
... skipping 203 lines ...
• [SLOW TEST:12.547 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":28,"skipped":224,"failed":5,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

SS
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents","total":-1,"completed":25,"skipped":176,"failed":4,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] Mounted volume expand Should verify mounted devices can be resized","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents"]}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:15:15.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 16 lines ...
• [SLOW TEST:22.417 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":176,"failed":4,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] Mounted volume expand Should verify mounted devices can be resized","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents"]}

SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":20,"skipped":252,"failed":4,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]"]}
[BeforeEach] [sig-storage] PersistentVolumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:10:05.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
Jul  5 19:10:18.042: INFO: PersistentVolumeClaim pvc-d7rrw found and phase=Bound (8.548069883s)
Jul  5 19:10:18.042: INFO: Waiting up to 3m0s for PersistentVolume nfs-6khhq to have phase Bound
Jul  5 19:10:18.150: INFO: PersistentVolume nfs-6khhq found and phase=Bound (108.429218ms)
STEP: Checking pod has write access to PersistentVolume
Jul  5 19:10:18.370: INFO: Creating nfs test pod
Jul  5 19:10:18.480: INFO: Pod should terminate with exitcode 0 (success)
Jul  5 19:10:18.480: INFO: Waiting up to 5m0s for pod "pvc-tester-qk4rt" in namespace "pv-6804" to be "Succeeded or Failed"
Jul  5 19:10:18.589: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 108.982105ms
Jul  5 19:10:20.700: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219078755s
Jul  5 19:10:22.809: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328634367s
Jul  5 19:10:24.918: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43771632s
Jul  5 19:10:27.029: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548475576s
Jul  5 19:10:29.138: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657403415s
... skipping 133 lines ...
Jul  5 19:15:11.878: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.397406253s
Jul  5 19:15:13.994: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.513627037s
Jul  5 19:15:16.104: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.623137071s
Jul  5 19:15:18.215: INFO: Pod "pvc-tester-qk4rt": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.734271027s
Jul  5 19:15:20.216: INFO: Deleting pod "pvc-tester-qk4rt" in namespace "pv-6804"
Jul  5 19:15:20.329: INFO: Wait up to 5m0s for pod "pvc-tester-qk4rt" to be fully deleted
Jul  5 19:15:30.549: FAIL: Unexpected error:
    <*errors.errorString | 0xc003ca6370>: {
        s: "pod \"pvc-tester-qk4rt\" did not exit with Success: pod \"pvc-tester-qk4rt\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-qk4rt\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-qk4rt" did not exit with Success: pod "pvc-tester-qk4rt" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-qk4rt" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.completeTest(0xc003a04160, 0x78a18a8, 0xc002e54160, 0xc00425c479, 0x7, 0xc003996500, 0xc000310700)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52 +0x19c
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.3.4()
... skipping 23 lines ...
Jul  5 19:15:35.231: INFO: At 2021-07-05 19:10:06 +0000 UTC - event for nfs-server: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Jul  5 19:15:35.231: INFO: At 2021-07-05 19:10:06 +0000 UTC - event for nfs-server: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} Created: Created container nfs-server
Jul  5 19:15:35.231: INFO: At 2021-07-05 19:10:07 +0000 UTC - event for nfs-server: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} Started: Started container nfs-server
Jul  5 19:15:35.231: INFO: At 2021-07-05 19:10:09 +0000 UTC - event for pvc-d7rrw: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Jul  5 19:15:35.231: INFO: At 2021-07-05 19:10:18 +0000 UTC - event for pvc-tester-qk4rt: {default-scheduler } Scheduled: Successfully assigned pv-6804/pvc-tester-qk4rt to ip-172-20-47-191.eu-central-1.compute.internal
Jul  5 19:15:35.231: INFO: At 2021-07-05 19:12:21 +0000 UTC - event for pvc-tester-qk4rt: {kubelet ip-172-20-47-191.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[kube-api-access-twb9t volume1]: timed out waiting for the condition
Jul  5 19:15:35.231: INFO: At 2021-07-05 19:13:20 +0000 UTC - event for pvc-tester-qk4rt: {kubelet ip-172-20-47-191.eu-central-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-6khhq" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.4.252:/exports /var/lib/kubelet/pods/784006b9-2784-4fb3-987e-506d477d3b89/volumes/kubernetes.io~nfs/nfs-6khhq
Output: mount.nfs: Connection timed out

Jul  5 19:15:35.231: INFO: At 2021-07-05 19:14:38 +0000 UTC - event for pvc-tester-qk4rt: {kubelet ip-172-20-47-191.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-twb9t]: timed out waiting for the condition
Jul  5 19:15:35.231: INFO: At 2021-07-05 19:15:30 +0000 UTC - event for nfs-server: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} Killing: Stopping container nfs-server
... skipping 199 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      create a PVC and a pre-bound PV: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:187

      Jul  5 19:15:30.549: Unexpected error:
          <*errors.errorString | 0xc003ca6370>: {
              s: "pod \"pvc-tester-qk4rt\" did not exit with Success: pod \"pvc-tester-qk4rt\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-qk4rt\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-qk4rt" did not exit with Success: pod "pvc-tester-qk4rt" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-qk4rt" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access","total":-1,"completed":20,"skipped":252,"failed":5,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}
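The repeated `Phase="Pending" ... Elapsed: ...` lines in the NFS test above come from polling the pod's phase until it is terminal, and the final error is the "gave up" branch of that loop. A minimal sketch of that logic (illustrative only; `get_phase` stands in for an API call, and `timeout_polls` is an invented parameter bounding the attempts):

```python
def wait_for_terminal_phase(get_phase, timeout_polls):
    """Poll a pod's phase until it reaches a terminal state.

    Sketch of the condition the e2e framework logs as waiting for a
    pod to be "Succeeded or Failed" (not the actual framework code).
    """
    for _ in range(timeout_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    # Exhausted the polling budget while the pod was still Pending/Running.
    raise TimeoutError('gave up waiting for pod to be "Succeeded or Failed"')
```

Here the pod stayed `Pending` for the full 5m0s because its NFS volume never mounted (`mount.nfs: Connection timed out`), so the loop gave up and the test failed.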

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:39.665: INFO: Only supported for providers [azure] (not aws)
... skipping 81 lines ...
Jul  5 19:13:37.928: INFO: Running '/tmp/kubectl1184884490/kubectl --server=https://api.e2e-cbaca3167c-c6e4d.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=services-7590 exec verify-service-up-host-exec-pod -- /bin/sh -x -c for i in $(seq 1 150); do wget -q -T 1 -O - http://100.68.156.12:80 2>&1 || true; echo; done'
Jul  5 19:15:30.235: INFO: stderr: "+ seq 1 150\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ 
wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 
-O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ 
wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget 
-q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n+ wget -q -T 1 -O - http://100.68.156.12:80\n+ true\n+ echo\n"
Jul  5 19:15:30.235: INFO: stdout: "wget: download timed out\n\n" repeated for most attempts, interleaved only with "service-headless-toggled-57fp8\n" responses (full repetitive stdout elided; service-headless-toggled-57fp8 was the sole endpoint that ever answered)
Jul  5 19:15:30.235: INFO: Unable to reach the following endpoints of service 100.68.156.12: map[service-headless-toggled-dxmz5:{} service-headless-toggled-nrgwn:{}]
STEP: Deleting pod verify-service-up-host-exec-pod in namespace services-7590
STEP: Deleting pod verify-service-up-exec-pod-74rmb in namespace services-7590
Jul  5 19:15:35.465: FAIL: Unexpected error:
    <*errors.errorString | 0xc00290c0b0>: {
        s: "service verification failed for: 100.68.156.12\nexpected [service-headless-toggled-57fp8 service-headless-toggled-dxmz5 service-headless-toggled-nrgwn]\nreceived [service-headless-toggled-57fp8 wget: download timed out]",
    }
    service verification failed for: 100.68.156.12
    expected [service-headless-toggled-57fp8 service-headless-toggled-dxmz5 service-headless-toggled-nrgwn]
    received [service-headless-toggled-57fp8 wget: download timed out]
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.29()
... skipping 256 lines ...
• Failure [359.724 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should implement service.kubernetes.io/headless [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1934

  Jul  5 19:15:35.465: Unexpected error:
      <*errors.errorString | 0xc00290c0b0>: {
          s: "service verification failed for: 100.68.156.12\nexpected [service-headless-toggled-57fp8 service-headless-toggled-dxmz5 service-headless-toggled-nrgwn]\nreceived [service-headless-toggled-57fp8 wget: download timed out]",
      }
      service verification failed for: 100.68.156.12
      expected [service-headless-toggled-57fp8 service-headless-toggled-dxmz5 service-headless-toggled-nrgwn]
      received [service-headless-toggled-57fp8 wget: download timed out]
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1959
------------------------------
{"msg":"FAILED [sig-network] Services should implement service.kubernetes.io/headless","total":-1,"completed":20,"skipped":143,"failed":4,"failures":["[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless"]}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:15:40.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2268" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":21,"skipped":260,"failed":5,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:40.701: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 43 lines ...
Jul  5 19:15:29.443: INFO: PersistentVolumeClaim pvc-ggkqj found but phase is Pending instead of Bound.
Jul  5 19:15:31.553: INFO: PersistentVolumeClaim pvc-ggkqj found and phase=Bound (2.219818134s)
Jul  5 19:15:31.553: INFO: Waiting up to 3m0s for PersistentVolume local-h9whx to have phase Bound
Jul  5 19:15:31.662: INFO: PersistentVolume local-h9whx found and phase=Bound (109.847057ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-trdk
STEP: Creating a pod to test subpath
Jul  5 19:15:31.993: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-trdk" in namespace "provisioning-605" to be "Succeeded or Failed"
Jul  5 19:15:32.103: INFO: Pod "pod-subpath-test-preprovisionedpv-trdk": Phase="Pending", Reason="", readiness=false. Elapsed: 109.794751ms
Jul  5 19:15:34.214: INFO: Pod "pod-subpath-test-preprovisionedpv-trdk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220394044s
Jul  5 19:15:36.325: INFO: Pod "pod-subpath-test-preprovisionedpv-trdk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.331260992s
STEP: Saw pod success
Jul  5 19:15:36.325: INFO: Pod "pod-subpath-test-preprovisionedpv-trdk" satisfied condition "Succeeded or Failed"
Jul  5 19:15:36.434: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-trdk container test-container-subpath-preprovisionedpv-trdk: <nil>
STEP: delete the pod
Jul  5 19:15:36.664: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-trdk to disappear
Jul  5 19:15:36.777: INFO: Pod pod-subpath-test-preprovisionedpv-trdk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-trdk
Jul  5 19:15:36.777: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-trdk" in namespace "provisioning-605"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly file specified in the volumeMount [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:379
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":30,"skipped":297,"failed":7,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jul  5 19:15:40.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-924cfa90-e557-40f2-a7e4-94395be94de5" in namespace "downward-api-8940" to be "Succeeded or Failed"
Jul  5 19:15:40.683: INFO: Pod "downwardapi-volume-924cfa90-e557-40f2-a7e4-94395be94de5": Phase="Pending", Reason="", readiness=false. Elapsed: 108.697538ms
Jul  5 19:15:42.794: INFO: Pod "downwardapi-volume-924cfa90-e557-40f2-a7e4-94395be94de5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218843153s
STEP: Saw pod success
Jul  5 19:15:42.794: INFO: Pod "downwardapi-volume-924cfa90-e557-40f2-a7e4-94395be94de5" satisfied condition "Succeeded or Failed"
Jul  5 19:15:42.902: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod downwardapi-volume-924cfa90-e557-40f2-a7e4-94395be94de5 container client-container: <nil>
STEP: delete the pod
Jul  5 19:15:43.125: INFO: Waiting for pod downwardapi-volume-924cfa90-e557-40f2-a7e4-94395be94de5 to disappear
Jul  5 19:15:43.234: INFO: Pod downwardapi-volume-924cfa90-e557-40f2-a7e4-94395be94de5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:15:43.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8940" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":144,"failed":4,"failures":["[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:43.471: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 91 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":29,"skipped":226,"failed":5,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 41 lines ...
Jul  5 19:15:15.496: INFO: creating *v1.CSIDriver: csi-mock-csi-mock-volumes-356
Jul  5 19:15:15.607: INFO: creating *v1.StatefulSet: csi-mock-volumes-356-9161/csi-mockplugin-attacher
Jul  5 19:15:15.718: INFO: waiting up to 4m0s for CSIDriver "csi-mock-csi-mock-volumes-356"
Jul  5 19:15:15.827: INFO: waiting for CSIDriver csi-mock-csi-mock-volumes-356 to register on node ip-172-20-60-158.eu-central-1.compute.internal
STEP: Creating pod
STEP: checking for CSIInlineVolumes feature
Jul  5 19:15:20.607: INFO: Error getting logs for pod inline-volume-7g7hq: the server rejected our request for an unknown reason (get pods inline-volume-7g7hq)
Jul  5 19:15:20.717: INFO: Deleting pod "inline-volume-7g7hq" in namespace "csi-mock-volumes-356"
Jul  5 19:15:20.828: INFO: Wait up to 5m0s for pod "inline-volume-7g7hq" to be fully deleted
STEP: Deleting the previously created pod
Jul  5 19:15:31.046: INFO: Deleting pod "pvc-volume-tester-49npb" in namespace "csi-mock-volumes-356"
Jul  5 19:15:31.159: INFO: Wait up to 5m0s for pod "pvc-volume-tester-49npb" to be fully deleted
STEP: Checking CSI driver logs
Jul  5 19:15:33.493: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-49npb
Jul  5 19:15:33.493: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-356
Jul  5 19:15:33.493: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: f75f9987-c7de-41a2-9f96-aa9a5507c980
Jul  5 19:15:33.493: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jul  5 19:15:33.493: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: true
Jul  5 19:15:33.493: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"csi-d757c3c195be603513229a469c0df8077854edaaf7b7b9d21750a216c18f32ff","target_path":"/var/lib/kubelet/pods/f75f9987-c7de-41a2-9f96-aa9a5507c980/volumes/kubernetes.io~csi/my-volume/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-49npb
Jul  5 19:15:33.493: INFO: Deleting pod "pvc-volume-tester-49npb" in namespace "csi-mock-volumes-356"
STEP: Cleaning up resources
STEP: deleting the test namespace: csi-mock-volumes-356
STEP: Waiting for namespaces [csi-mock-volumes-356] to vanish
STEP: uninstalling csi mock driver
... skipping 40 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    contain ephemeral=true when using inline volume
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume","total":-1,"completed":38,"skipped":341,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:55.737: INFO: Only supported for providers [azure] (not aws)
... skipping 158 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:192
    Two pods mounting a local volume one after the other
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:254
      should be able to write from pod1 and read from pod2
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:255
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":22,"skipped":264,"failed":5,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:56.418: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (delayed binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":16,"skipped":196,"failed":4,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:14:50.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 4 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul  5 19:14:52.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109292, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109292, loc:(*time.Location)(0x9f895a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109292, loc:(*time.Location)(0x9f895a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63761109292, loc:(*time.Location)(0x9f895a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  5 19:14:55.871: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
Jul  5 19:15:06.308: INFO: Waiting for webhook configuration to be ready...
Jul  5 19:15:16.627: INFO: Waiting for webhook configuration to be ready...
Jul  5 19:15:26.928: INFO: Waiting for webhook configuration to be ready...
Jul  5 19:15:37.230: INFO: Waiting for webhook configuration to be ready...
Jul  5 19:15:47.450: INFO: Waiting for webhook configuration to be ready...
Jul  5 19:15:47.451: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc000248250>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 432 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102


• Failure [65.890 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 19:15:47.451: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc000248250>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

... skipping 10 lines ...
[sig-storage] In-tree Volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (immediate binding)] topology
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/topology.go:192

      Driver local doesn't support DynamicPV -- skipping

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:116
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":16,"skipped":196,"failed":5,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:15:55.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  5 19:15:56.523: INFO: Waiting up to 5m0s for pod "pod-5757e080-9da7-4bf1-9472-cbcce0401cba" in namespace "emptydir-7728" to be "Succeeded or Failed"
Jul  5 19:15:56.632: INFO: Pod "pod-5757e080-9da7-4bf1-9472-cbcce0401cba": Phase="Pending", Reason="", readiness=false. Elapsed: 108.964777ms
Jul  5 19:15:58.741: INFO: Pod "pod-5757e080-9da7-4bf1-9472-cbcce0401cba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218647316s
STEP: Saw pod success
Jul  5 19:15:58.741: INFO: Pod "pod-5757e080-9da7-4bf1-9472-cbcce0401cba" satisfied condition "Succeeded or Failed"
Jul  5 19:15:58.850: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-5757e080-9da7-4bf1-9472-cbcce0401cba container test-container: <nil>
STEP: delete the pod
Jul  5 19:15:59.075: INFO: Waiting for pod pod-5757e080-9da7-4bf1-9472-cbcce0401cba to disappear
Jul  5 19:15:59.185: INFO: Pod pod-5757e080-9da7-4bf1-9472-cbcce0401cba no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:15:59.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7728" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":373,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:15:59.444: INFO: Only supported for providers [gce gke] (not aws)
... skipping 42 lines ...
Jul  5 19:15:56.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jul  5 19:15:57.107: INFO: Waiting up to 5m0s for pod "downward-api-b6781290-756e-4e42-a12e-79e94ec01f77" in namespace "downward-api-9159" to be "Succeeded or Failed"
Jul  5 19:15:57.216: INFO: Pod "downward-api-b6781290-756e-4e42-a12e-79e94ec01f77": Phase="Pending", Reason="", readiness=false. Elapsed: 108.760332ms
Jul  5 19:15:59.325: INFO: Pod "downward-api-b6781290-756e-4e42-a12e-79e94ec01f77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.218259105s
STEP: Saw pod success
Jul  5 19:15:59.325: INFO: Pod "downward-api-b6781290-756e-4e42-a12e-79e94ec01f77" satisfied condition "Succeeded or Failed"
Jul  5 19:15:59.434: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod downward-api-b6781290-756e-4e42-a12e-79e94ec01f77 container dapi-container: <nil>
STEP: delete the pod
Jul  5 19:15:59.659: INFO: Waiting for pod downward-api-b6781290-756e-4e42-a12e-79e94ec01f77 to disappear
Jul  5 19:15:59.767: INFO: Pod downward-api-b6781290-756e-4e42-a12e-79e94ec01f77 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:15:59.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9159" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":270,"failed":5,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume","[sig-network] Services should be able to create a functioning NodePort service [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:16:00.014: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 40 lines ...
• [SLOW TEST:67.451 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":21,"skipped":156,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:16:01.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 14 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":156,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:16:07.097: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 72 lines ...
STEP: create the rc
STEP: delete the rc
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0705 19:11:13.734999   12537 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0705 19:11:13.735074   12537 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  5 19:16:13.954: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jul  5 19:16:13.954: INFO: Deleting pod "simpletest.rc-7vq2p" in namespace "gc-1378"
Jul  5 19:16:14.070: INFO: Deleting pod "simpletest.rc-xrggn" in namespace "gc-1378"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:16:14.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1378" for this suite.
... skipping 2 lines ...
• [SLOW TEST:336.778 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if deleteOptions.OrphanDependents is nil
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/garbage_collector.go:449
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil","total":-1,"completed":15,"skipped":166,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data"]}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:16:14.440: INFO: Only supported for providers [gce gke] (not aws)
... skipping 135 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should validate Statefulset Status endpoints
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:957
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints","total":-1,"completed":22,"skipped":152,"failed":4,"failures":["[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:16:16.859: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-139b9399-043c-4bd7-92b5-9f3a1e37ff73
STEP: Creating a pod to test consume configMaps
Jul  5 19:16:17.642: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bbe70a1-e55a-48bc-beba-cc21534bf88e" in namespace "configmap-2805" to be "Succeeded or Failed"
Jul  5 19:16:17.751: INFO: Pod "pod-configmaps-0bbe70a1-e55a-48bc-beba-cc21534bf88e": Phase="Pending", Reason="", readiness=false. Elapsed: 108.521266ms
Jul  5 19:16:19.860: INFO: Pod "pod-configmaps-0bbe70a1-e55a-48bc-beba-cc21534bf88e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21790034s
STEP: Saw pod success
Jul  5 19:16:19.860: INFO: Pod "pod-configmaps-0bbe70a1-e55a-48bc-beba-cc21534bf88e" satisfied condition "Succeeded or Failed"
Jul  5 19:16:19.969: INFO: Trying to get logs from node ip-172-20-60-158.eu-central-1.compute.internal pod pod-configmaps-0bbe70a1-e55a-48bc-beba-cc21534bf88e container agnhost-container: <nil>
STEP: delete the pod
Jul  5 19:16:20.195: INFO: Waiting for pod pod-configmaps-0bbe70a1-e55a-48bc-beba-cc21534bf88e to disappear
Jul  5 19:16:20.303: INFO: Pod pod-configmaps-0bbe70a1-e55a-48bc-beba-cc21534bf88e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jul  5 19:16:20.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2805" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":163,"failed":4,"failures":["[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless"]}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:16:20.573: INFO: Only supported for providers [gce gke] (not aws)
... skipping 87 lines ...
Jul  5 19:16:14.090: INFO: PersistentVolumeClaim pvc-c4xl7 found but phase is Pending instead of Bound.
Jul  5 19:16:16.200: INFO: PersistentVolumeClaim pvc-c4xl7 found and phase=Bound (12.77014414s)
Jul  5 19:16:16.200: INFO: Waiting up to 3m0s for PersistentVolume local-cwjr2 to have phase Bound
Jul  5 19:16:16.309: INFO: PersistentVolume local-cwjr2 found and phase=Bound (109.09747ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9drx
STEP: Creating a pod to test subpath
Jul  5 19:16:16.639: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9drx" in namespace "provisioning-9954" to be "Succeeded or Failed"
Jul  5 19:16:16.749: INFO: Pod "pod-subpath-test-preprovisionedpv-9drx": Phase="Pending", Reason="", readiness=false. Elapsed: 109.196518ms
Jul  5 19:16:18.859: INFO: Pod "pod-subpath-test-preprovisionedpv-9drx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21944742s
STEP: Saw pod success
Jul  5 19:16:18.859: INFO: Pod "pod-subpath-test-preprovisionedpv-9drx" satisfied condition "Succeeded or Failed"
Jul  5 19:16:18.968: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-9drx container test-container-volume-preprovisionedpv-9drx: <nil>
STEP: delete the pod
Jul  5 19:16:19.196: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9drx to disappear
Jul  5 19:16:19.305: INFO: Pod pod-subpath-test-preprovisionedpv-9drx no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9drx
Jul  5 19:16:19.305: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9drx" in namespace "provisioning-9954"
... skipping 36 lines ...
Jul  5 19:11:18.919: INFO: Creating resource for dynamic PV
Jul  5 19:11:18.919: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-9882m2t7q
STEP: creating a claim
Jul  5 19:11:19.042: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating a pod with dynamically provisioned volume
Jul  5 19:16:19.699: FAIL: While creating pods for resizing
Unexpected error:
    <*errors.errorString | 0xc00230eee0>: {
        s: "pod \"pod-9cd48891-89d3-46cf-a5ea-2ca7b9cb74b6\" is not Running: timed out waiting for the condition",
    }
    pod "pod-9cd48891-89d3-46cf-a5ea-2ca7b9cb74b6" is not Running: timed out waiting for the condition
occurred

... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-expand-9882".
STEP: Found 6 events.
Jul  5 19:16:20.470: INFO: At 2021-07-05 19:11:19 +0000 UTC - event for awsl6trf: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:16:20.470: INFO: At 2021-07-05 19:11:19 +0000 UTC - event for awsl6trf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "volume-expand-9882/awsl6trf"
Jul  5 19:16:20.470: INFO: At 2021-07-05 19:11:19 +0000 UTC - event for awsl6trf: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:16:20.470: INFO: At 2021-07-05 19:11:29 +0000 UTC - event for awsl6trf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-9882m2t7q": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:16:20.470: INFO: At 2021-07-05 19:11:40 +0000 UTC - event for awsl6trf: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-9882m2t7q": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:16:20.470: INFO: At 2021-07-05 19:16:20 +0000 UTC - event for pod-9cd48891-89d3-46cf-a5ea-2ca7b9cb74b6: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: pod does not exist any more: pod "pod-9cd48891-89d3-46cf-a5ea-2ca7b9cb74b6" not found
Jul  5 19:16:20.579: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:16:20.579: INFO: 
Jul  5 19:16:20.689: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
... skipping 198 lines ...
    [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Jul  5 19:16:19.699: While creating pods for resizing
      Unexpected error:
          <*errors.errorString | 0xc00230eee0>: {
              s: "pod \"pod-9cd48891-89d3-46cf-a5ea-2ca7b9cb74b6\" is not Running: timed out waiting for the condition",
          }
          pod "pod-9cd48891-89d3-46cf-a5ea-2ca7b9cb74b6" is not Running: timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:268
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":19,"skipped":118,"failed":3,"failures":["[sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
Jul  5 19:16:24.835: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 59 lines ...
Jul  5 19:16:14.976: INFO: PersistentVolumeClaim pvc-7gptl found but phase is Pending instead of Bound.
Jul  5 19:16:17.086: INFO: PersistentVolumeClaim pvc-7gptl found and phase=Bound (14.870620921s)
Jul  5 19:16:17.086: INFO: Waiting up to 3m0s for PersistentVolume local-fvksx to have phase Bound
Jul  5 19:16:17.194: INFO: PersistentVolume local-fvksx found and phase=Bound (108.101615ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-h68h
STEP: Creating a pod to test subpath
Jul  5 19:16:17.522: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-h68h" in namespace "provisioning-7607" to be "Succeeded or Failed"
Jul  5 19:16:17.632: INFO: Pod "pod-subpath-test-preprovisionedpv-h68h": Phase="Pending", Reason="", readiness=false. Elapsed: 109.604794ms
Jul  5 19:16:19.742: INFO: Pod "pod-subpath-test-preprovisionedpv-h68h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219215791s
Jul  5 19:16:21.853: INFO: Pod "pod-subpath-test-preprovisionedpv-h68h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.330442301s
STEP: Saw pod success
Jul  5 19:16:21.853: INFO: Pod "pod-subpath-test-preprovisionedpv-h68h" satisfied condition "Succeeded or Failed"
Jul  5 19:16:21.962: INFO: Trying to get logs from node ip-172-20-59-37.eu-central-1.compute.internal pod pod-subpath-test-preprovisionedpv-h68h container test-container-subpath-preprovisionedpv-h68h: <nil>
STEP: delete the pod
Jul  5 19:16:22.197: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-h68h to disappear
Jul  5 19:16:22.306: INFO: Pod pod-subpath-test-preprovisionedpv-h68h no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-h68h
Jul  5 19:16:22.306: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-h68h" in namespace "provisioning-7607"
... skipping 26 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":17,"skipped":200,"failed":5,"failures":["[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","[sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
Jul  5 19:16:26.155: INFO: Running AfterSuite actions on all nodes
Jul  5 19:16:26.155: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:16:26.155: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:16:26.155: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:16:26.155: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:16:26.155: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 36 lines ...
• [SLOW TEST:11.761 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":24,"skipped":176,"failed":4,"failures":["[sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup applied to the volume contents","[sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","[sig-network] Services should implement service.kubernetes.io/headless"]}
Jul  5 19:16:32.382: INFO: Running AfterSuite actions on all nodes
Jul  5 19:16:32.382: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:16:32.382: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:16:32.382: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:16:32.382: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:16:32.382: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 14 lines ...
Jul  5 19:11:37.195: INFO: Creating resource for dynamic PV
Jul  5 19:11:37.195: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass fsgroupchangepolicy-17864l725
STEP: creating a claim
Jul  5 19:11:37.305: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating Pod in namespace fsgroupchangepolicy-1786 with fsgroup 1000
Jul  5 19:16:37.965: FAIL: Unexpected error:
    <*errors.errorString | 0xc0034950a0>: {
        s: "pod \"pod-36df7ac8-eb02-4394-a184-5146013459ea\" is not Running: timed out waiting for the condition",
    }
    pod "pod-36df7ac8-eb02-4394-a184-5146013459ea" is not Running: timed out waiting for the condition
occurred

... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "fsgroupchangepolicy-1786".
STEP: Found 5 events.
Jul  5 19:16:38.405: INFO: At 2021-07-05 19:11:37 +0000 UTC - event for awsbmthr: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:16:38.405: INFO: At 2021-07-05 19:11:37 +0000 UTC - event for awsbmthr: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "fsgroupchangepolicy-1786/awsbmthr"
Jul  5 19:16:38.405: INFO: At 2021-07-05 19:11:37 +0000 UTC - event for awsbmthr: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:16:38.405: INFO: At 2021-07-05 19:11:47 +0000 UTC - event for awsbmthr: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-17864l725": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:16:38.405: INFO: At 2021-07-05 19:12:10 +0000 UTC - event for awsbmthr: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-17864l725": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:16:38.515: INFO: POD                                       NODE  PHASE    GRACE  CONDITIONS
Jul  5 19:16:38.515: INFO: pod-36df7ac8-eb02-4394-a184-5146013459ea        Pending         []
Jul  5 19:16:38.515: INFO: 
Jul  5 19:16:38.625: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
... skipping 191 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Jul  5 19:16:37.965: Unexpected error:
          <*errors.errorString | 0xc0034950a0>: {
              s: "pod \"pod-36df7ac8-eb02-4394-a184-5146013459ea\" is not Running: timed out waiting for the condition",
          }
          pod "pod-36df7ac8-eb02-4394-a184-5146013459ea" is not Running: timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:250
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents","total":-1,"completed":22,"skipped":287,"failed":6,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","[sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents"]}
Jul  5 19:16:42.733: INFO: Running AfterSuite actions on all nodes
Jul  5 19:16:42.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:16:42.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:16:42.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:16:42.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:16:42.733: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 65 lines ...
Jul  5 19:15:44.339: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jul  5 19:15:44.449: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [csi-hostpath7b625] to have phase Bound
Jul  5 19:15:44.557: INFO: PersistentVolumeClaim csi-hostpath7b625 found but phase is Pending instead of Bound.
Jul  5 19:15:46.667: INFO: PersistentVolumeClaim csi-hostpath7b625 found and phase=Bound (2.217881751s)
STEP: Creating pod pod-subpath-test-dynamicpv-fvkr
STEP: Creating a pod to test subpath
Jul  5 19:15:46.997: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-fvkr" in namespace "provisioning-1518" to be "Succeeded or Failed"
Jul  5 19:15:47.107: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 109.49481ms
Jul  5 19:15:49.217: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21950291s
Jul  5 19:15:51.327: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329803085s
Jul  5 19:15:53.438: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440526578s
Jul  5 19:15:55.547: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549538921s
Jul  5 19:15:57.656: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.658925415s
Jul  5 19:15:59.766: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 12.768430088s
Jul  5 19:16:01.875: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.877944872s
Jul  5 19:16:03.985: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Pending", Reason="", readiness=false. Elapsed: 16.987619179s
Jul  5 19:16:06.095: INFO: Pod "pod-subpath-test-dynamicpv-fvkr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.097522104s
STEP: Saw pod success
Jul  5 19:16:06.095: INFO: Pod "pod-subpath-test-dynamicpv-fvkr" satisfied condition "Succeeded or Failed"
Jul  5 19:16:06.204: INFO: Trying to get logs from node ip-172-20-36-144.eu-central-1.compute.internal pod pod-subpath-test-dynamicpv-fvkr container test-container-subpath-dynamicpv-fvkr: <nil>
STEP: delete the pod
Jul  5 19:16:06.443: INFO: Waiting for pod pod-subpath-test-dynamicpv-fvkr to disappear
Jul  5 19:16:06.551: INFO: Pod pod-subpath-test-dynamicpv-fvkr no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-fvkr
Jul  5 19:16:06.551: INFO: Deleting pod "pod-subpath-test-dynamicpv-fvkr" in namespace "provisioning-1518"
... skipping 60 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support readOnly directory specified in the volumeMount
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:364
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":27,"skipped":178,"failed":4,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","[sig-storage] Mounted volume expand Should verify mounted devices can be resized","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with same fsgroup skips ownership changes to the volume contents"]}
Jul  5 19:16:46.022: INFO: Running AfterSuite actions on all nodes
Jul  5 19:16:46.022: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:16:46.022: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:16:46.022: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:16:46.022: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:16:46.022: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 26 lines ...
Jul  5 19:11:29.979: INFO: PersistentVolumeClaim pvc-kmmr4 found and phase=Bound (110.087612ms)
Jul  5 19:11:29.979: INFO: Waiting up to 3m0s for PersistentVolume nfs-z4jrh to have phase Bound
Jul  5 19:11:30.089: INFO: PersistentVolume nfs-z4jrh found and phase=Bound (110.073308ms)
STEP: Checking pod has write access to PersistentVolume
Jul  5 19:11:30.310: INFO: Creating nfs test pod
Jul  5 19:11:30.421: INFO: Pod should terminate with exitcode 0 (success)
Jul  5 19:11:30.421: INFO: Waiting up to 5m0s for pod "pvc-tester-54s9q" in namespace "pv-5435" to be "Succeeded or Failed"
Jul  5 19:11:30.531: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 110.083597ms
Jul  5 19:11:32.642: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221357698s
Jul  5 19:11:34.754: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332972691s
Jul  5 19:11:36.866: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44507316s
Jul  5 19:11:38.977: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.556558035s
Jul  5 19:11:41.088: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.667172916s
... skipping 133 lines ...
Jul  5 19:16:24.162: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.741227015s
Jul  5 19:16:26.273: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.851979847s
Jul  5 19:16:28.385: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.963757695s
Jul  5 19:16:30.496: INFO: Pod "pvc-tester-54s9q": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.074777012s
Jul  5 19:16:32.496: INFO: Deleting pod "pvc-tester-54s9q" in namespace "pv-5435"
Jul  5 19:16:32.609: INFO: Wait up to 5m0s for pod "pvc-tester-54s9q" to be fully deleted
Jul  5 19:16:40.830: FAIL: Unexpected error:
    <*errors.errorString | 0xc00387b670>: {
        s: "pod \"pvc-tester-54s9q\" did not exit with Success: pod \"pvc-tester-54s9q\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-54s9q\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-54s9q" did not exit with Success: pod "pvc-tester-54s9q" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-54s9q" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.completeTest(0xc00345b8c0, 0x78a18a8, 0xc0033691e0, 0xc003abafe9, 0x7, 0xc0032dc780, 0xc003cf2000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52 +0x19c
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.3.2()
... skipping 22 lines ...
Jul  5 19:17:01.503: INFO: At 2021-07-05 19:11:22 +0000 UTC - event for nfs-server: {default-scheduler } Scheduled: Successfully assigned pv-5435/nfs-server to ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 19:17:01.503: INFO: At 2021-07-05 19:11:24 +0000 UTC - event for nfs-server: {kubelet ip-172-20-36-144.eu-central-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/volume/nfs:1.2" already present on machine
Jul  5 19:17:01.503: INFO: At 2021-07-05 19:11:24 +0000 UTC - event for nfs-server: {kubelet ip-172-20-36-144.eu-central-1.compute.internal} Created: Created container nfs-server
Jul  5 19:17:01.503: INFO: At 2021-07-05 19:11:24 +0000 UTC - event for nfs-server: {kubelet ip-172-20-36-144.eu-central-1.compute.internal} Started: Started container nfs-server
Jul  5 19:17:01.503: INFO: At 2021-07-05 19:11:30 +0000 UTC - event for pvc-tester-54s9q: {default-scheduler } Scheduled: Successfully assigned pv-5435/pvc-tester-54s9q to ip-172-20-60-158.eu-central-1.compute.internal
Jul  5 19:17:01.503: INFO: At 2021-07-05 19:13:33 +0000 UTC - event for pvc-tester-54s9q: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[kube-api-access-gq6cq volume1]: timed out waiting for the condition
Jul  5 19:17:01.503: INFO: At 2021-07-05 19:14:31 +0000 UTC - event for pvc-tester-54s9q: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-z4jrh" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.2.16:/exports /var/lib/kubelet/pods/abb37d3a-fbe0-4e3b-a2a7-bcf490e7aa69/volumes/kubernetes.io~nfs/nfs-z4jrh
Output: mount.nfs: Connection timed out

Jul  5 19:17:01.503: INFO: At 2021-07-05 19:16:41 +0000 UTC - event for nfs-server: {kubelet ip-172-20-36-144.eu-central-1.compute.internal} Killing: Stopping container nfs-server
Jul  5 19:17:01.613: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 182 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:155
      should create a non-pre-bound PV and PVC: test write access  [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:169

      Jul  5 19:16:40.830: Unexpected error:
          <*errors.errorString | 0xc00387b670>: {
              s: "pod \"pvc-tester-54s9q\" did not exit with Success: pod \"pvc-tester-54s9q\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-54s9q\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-54s9q" did not exit with Success: pod "pvc-tester-54s9q" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-54s9q" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:52
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access ","total":-1,"completed":40,"skipped":303,"failed":5,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-network] Services should implement service.kubernetes.io/service-proxy-name","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","[sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access "]}
Jul  5 19:17:05.915: INFO: Running AfterSuite actions on all nodes
Jul  5 19:17:05.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:17:05.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:17:05.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:17:05.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:17:05.916: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 22 lines ...
Jul  5 19:11:45.156: INFO: PersistentVolumeClaim pvc-mvrjd found but phase is Pending instead of Bound.
Jul  5 19:11:47.266: INFO: PersistentVolumeClaim pvc-mvrjd found and phase=Bound (2.219678741s)
Jul  5 19:11:47.266: INFO: Waiting up to 3m0s for PersistentVolume aws-4hnbk to have phase Bound
Jul  5 19:11:47.376: INFO: PersistentVolume aws-4hnbk found and phase=Bound (109.707983ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-mpwd
STEP: Creating a pod to test exec-volume-test
Jul  5 19:11:47.707: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-mpwd" in namespace "volume-3080" to be "Succeeded or Failed"
Jul  5 19:11:47.817: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 109.856841ms
Jul  5 19:11:49.928: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220241509s
Jul  5 19:11:52.039: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331634241s
Jul  5 19:11:54.150: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442915884s
Jul  5 19:11:56.262: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554338559s
Jul  5 19:11:58.373: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665403907s
... skipping 131 lines ...
Jul  5 19:16:37.021: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.314025653s
Jul  5 19:16:39.132: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.424797726s
Jul  5 19:16:41.244: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.536263001s
Jul  5 19:16:43.354: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.646984196s
Jul  5 19:16:45.466: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.758335133s
Jul  5 19:16:47.576: INFO: Pod "exec-volume-test-preprovisionedpv-mpwd": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.868973962s
Jul  5 19:16:49.799: INFO: Failed to get logs from node "ip-172-20-60-158.eu-central-1.compute.internal" pod "exec-volume-test-preprovisionedpv-mpwd" container "exec-container-preprovisionedpv-mpwd": the server rejected our request for an unknown reason (get pods exec-volume-test-preprovisionedpv-mpwd)
STEP: delete the pod
Jul  5 19:16:49.911: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mpwd to disappear
Jul  5 19:16:50.021: INFO: Pod exec-volume-test-preprovisionedpv-mpwd still exists
Jul  5 19:16:52.023: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mpwd to disappear
Jul  5 19:16:52.134: INFO: Pod exec-volume-test-preprovisionedpv-mpwd still exists
Jul  5 19:16:54.022: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mpwd to disappear
... skipping 3 lines ...
Jul  5 19:16:58.022: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mpwd to disappear
Jul  5 19:16:58.132: INFO: Pod exec-volume-test-preprovisionedpv-mpwd still exists
Jul  5 19:17:00.023: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mpwd to disappear
Jul  5 19:17:00.133: INFO: Pod exec-volume-test-preprovisionedpv-mpwd still exists
Jul  5 19:17:02.022: INFO: Waiting for pod exec-volume-test-preprovisionedpv-mpwd to disappear
Jul  5 19:17:02.132: INFO: Pod exec-volume-test-preprovisionedpv-mpwd no longer exists
Jul  5 19:17:02.132: FAIL: Unexpected error:
    <*errors.errorString | 0xc0051f9050>: {
        s: "expected pod \"exec-volume-test-preprovisionedpv-mpwd\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-preprovisionedpv-mpwd\" to be \"Succeeded or Failed\"",
    }
    expected pod "exec-volume-test-preprovisionedpv-mpwd" success: Gave up after waiting 5m0s for pod "exec-volume-test-preprovisionedpv-mpwd" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0020cec60, 0x6fd77e0, 0x10, 0xc00275f000, 0x0, 0xc000a750d8, 0x1, 0x1, 0x71d34c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 17 lines ...
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-3080".
STEP: Found 6 events.
Jul  5 19:17:03.116: INFO: At 2021-07-05 19:11:44 +0000 UTC - event for pvc-mvrjd: {persistentvolume-controller } ProvisioningFailed: storageclass.storage.k8s.io "volume-3080" not found
Jul  5 19:17:03.116: INFO: At 2021-07-05 19:11:47 +0000 UTC - event for exec-volume-test-preprovisionedpv-mpwd: {default-scheduler } Scheduled: Successfully assigned volume-3080/exec-volume-test-preprovisionedpv-mpwd to ip-172-20-60-158.eu-central-1.compute.internal
Jul  5 19:17:03.116: INFO: At 2021-07-05 19:12:03 +0000 UTC - event for exec-volume-test-preprovisionedpv-mpwd: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "aws-4hnbk" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:17:03.116: INFO: At 2021-07-05 19:12:21 +0000 UTC - event for exec-volume-test-preprovisionedpv-mpwd: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "aws-4hnbk" : rpc error: code = NotFound desc = Instance "i-0c9540d6fb78f4b7f" not found
Jul  5 19:17:03.116: INFO: At 2021-07-05 19:13:50 +0000 UTC - event for exec-volume-test-preprovisionedpv-mpwd: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[kube-api-access-j45jx vol1]: timed out waiting for the condition
Jul  5 19:17:03.116: INFO: At 2021-07-05 19:16:05 +0000 UTC - event for exec-volume-test-preprovisionedpv-mpwd: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[vol1 kube-api-access-j45jx]: timed out waiting for the condition
Jul  5 19:17:03.226: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:17:03.226: INFO: 
Jul  5 19:17:03.338: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
... skipping 179 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Jul  5 19:17:02.132: Unexpected error:
          <*errors.errorString | 0xc0051f9050>: {
              s: "expected pod \"exec-volume-test-preprovisionedpv-mpwd\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-preprovisionedpv-mpwd\" to be \"Succeeded or Failed\"",
          }
          expected pod "exec-volume-test-preprovisionedpv-mpwd" success: Gave up after waiting 5m0s for pod "exec-volume-test-preprovisionedpv-mpwd" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":38,"skipped":292,"failed":5,"failures":["[sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume"]}
Jul  5 19:17:07.456: INFO: Running AfterSuite actions on all nodes
Jul  5 19:17:07.456: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:17:07.456: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:17:07.456: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:17:07.456: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:17:07.456: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  5 19:17:07.456: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  5 19:17:07.456: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":40,"skipped":381,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
[BeforeEach] [sig-storage] CSI mock volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:16:20.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename csi-mock-volumes
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 44 lines ...
Jul  5 19:16:28.414: INFO: Waiting up to timeout=5m0s for PersistentVolumeClaims [pvc-lk59p] to have phase Bound
Jul  5 19:16:28.523: INFO: PersistentVolumeClaim pvc-lk59p found and phase=Bound (109.234891ms)
STEP: Deleting the previously created pod
Jul  5 19:16:39.072: INFO: Deleting pod "pvc-volume-tester-kc4r9" in namespace "csi-mock-volumes-8777"
Jul  5 19:16:39.183: INFO: Wait up to 5m0s for pod "pvc-volume-tester-kc4r9" to be fully deleted
STEP: Checking CSI driver logs
Jul  5 19:16:51.514: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/80ab821d-747c-42b5-881a-58b60a237a8e/volumes/kubernetes.io~csi/pvc-a70dcccd-b8ba-47d1-8ef9-eded3d57f90b/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-kc4r9
Jul  5 19:16:51.514: INFO: Deleting pod "pvc-volume-tester-kc4r9" in namespace "csi-mock-volumes-8777"
STEP: Deleting claim pvc-lk59p
Jul  5 19:16:51.847: INFO: Waiting up to 2m0s for PersistentVolume pvc-a70dcccd-b8ba-47d1-8ef9-eded3d57f90b to get deleted
Jul  5 19:16:51.956: INFO: PersistentVolume pvc-a70dcccd-b8ba-47d1-8ef9-eded3d57f90b was removed
STEP: Deleting storageclass csi-mock-volumes-8777-sc67c6b
... skipping 44 lines ...
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:444
    should not be passed when podInfoOnMount=false
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_mock_volume.go:494
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":41,"skipped":381,"failed":6,"failures":["[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-network] Services should be able to up and down services","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","[sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]"]}
Jul  5 19:17:14.230: INFO: Running AfterSuite actions on all nodes
Jul  5 19:17:14.230: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:17:14.230: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:17:14.230: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:17:14.230: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:17:14.230: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 16 lines ...
Jul  5 19:12:07.172: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-10026sr79
STEP: creating a claim
Jul  5 19:12:07.282: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-fxrt
STEP: Creating a pod to test subpath
Jul  5 19:12:07.613: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-fxrt" in namespace "provisioning-1002" to be "Succeeded or Failed"
Jul  5 19:12:07.723: INFO: Pod "pod-subpath-test-dynamicpv-fxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 110.314067ms
Jul  5 19:12:09.833: INFO: Pod "pod-subpath-test-dynamicpv-fxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220217806s
Jul  5 19:12:11.942: INFO: Pod "pod-subpath-test-dynamicpv-fxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329324278s
Jul  5 19:12:14.054: INFO: Pod "pod-subpath-test-dynamicpv-fxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.440898987s
Jul  5 19:12:16.164: INFO: Pod "pod-subpath-test-dynamicpv-fxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551356795s
Jul  5 19:12:18.276: INFO: Pod "pod-subpath-test-dynamicpv-fxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.662554041s
... skipping 136 lines ...
Jul  5 19:17:07.412: INFO: Pod "pod-subpath-test-dynamicpv-fxrt": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.799166902s
Jul  5 19:17:09.630: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-fxrt" container "init-volume-dynamicpv-fxrt": 
Jul  5 19:17:09.739: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-fxrt" container "test-container-subpath-dynamicpv-fxrt": 
STEP: delete the pod
Jul  5 19:17:09.853: INFO: Waiting for pod pod-subpath-test-dynamicpv-fxrt to disappear
Jul  5 19:17:09.961: INFO: Pod pod-subpath-test-dynamicpv-fxrt no longer exists
Jul  5 19:17:09.962: FAIL: Unexpected error:
    <*errors.errorString | 0xc005c80a20>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-fxrt\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-fxrt\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-dynamicpv-fxrt" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-fxrt" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc003265600, 0x6fb6221, 0x7, 0xc00628bc00, 0x0, 0xc0040e50d0, 0x1, 0x1, 0x71d34c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-1002".
STEP: Found 5 events.
Jul  5 19:17:10.513: INFO: At 2021-07-05 19:12:07 +0000 UTC - event for awsghntj: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:17:10.513: INFO: At 2021-07-05 19:12:07 +0000 UTC - event for awsghntj: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "provisioning-1002/awsghntj"
Jul  5 19:17:10.513: INFO: At 2021-07-05 19:12:07 +0000 UTC - event for awsghntj: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:17:10.513: INFO: At 2021-07-05 19:12:17 +0000 UTC - event for awsghntj: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-10026sr79": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:17:10.513: INFO: At 2021-07-05 19:13:12 +0000 UTC - event for awsghntj: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-10026sr79": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:17:10.622: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:17:10.622: INFO: 
Jul  5 19:17:10.732: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 19:17:10.841: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 45638 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.hostpath.csi/node:ip-172-20-36-144.eu-central-1.compute.internal topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2021-07-05 19:15:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2021-07-05 19:15:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 
LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db docker.io/library/nginx:latest],SizeBytes:53740695,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d,DevicePath:,},},Config:nil,},}
Jul  5 19:17:10.842: INFO: 
... skipping 171 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support existing single file [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:219

      Jul  5 19:17:09.962: Unexpected error:
          <*errors.errorString | 0xc005c80a20>: {
              s: "expected pod \"pod-subpath-test-dynamicpv-fxrt\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-fxrt\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-dynamicpv-fxrt" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-fxrt" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]","total":-1,"completed":20,"skipped":213,"failed":4,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]"]}
Jul  5 19:17:14.778: INFO: Running AfterSuite actions on all nodes
Jul  5 19:17:14.779: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:17:14.779: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:17:14.779: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:17:14.779: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:17:14.779: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 122 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:40
    [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should create read-only inline ephemeral volume
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/ephemeral.go:149
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":23,"skipped":167,"failed":2,"failures":["[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource"]}
Jul  5 19:17:23.722: INFO: Running AfterSuite actions on all nodes
Jul  5 19:17:23.722: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:17:23.722: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:17:23.722: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:17:23.722: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:17:23.722: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 29 lines ...
Jul  5 19:12:20.442: INFO: PersistentVolume nfs-4268r found and phase=Bound (108.456877ms)
Jul  5 19:12:20.551: INFO: Waiting up to timeout=3m0s for PersistentVolumeClaims [pvc-b6hvf] to have phase Bound
Jul  5 19:12:20.660: INFO: PersistentVolumeClaim pvc-b6hvf found and phase=Bound (108.753798ms)
STEP: Checking pod has write access to PersistentVolumes
Jul  5 19:12:20.769: INFO: Creating nfs test pod
Jul  5 19:12:20.879: INFO: Pod should terminate with exitcode 0 (success)
Jul  5 19:12:20.879: INFO: Waiting up to 5m0s for pod "pvc-tester-mf7qk" in namespace "pv-5614" to be "Succeeded or Failed"
Jul  5 19:12:20.987: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 108.270386ms
Jul  5 19:12:23.097: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218272501s
Jul  5 19:12:25.206: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327147368s
Jul  5 19:12:27.316: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437160103s
Jul  5 19:12:29.425: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546523934s
Jul  5 19:12:31.535: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656197061s
... skipping 133 lines ...
Jul  5 19:17:14.291: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.411893516s
Jul  5 19:17:16.400: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.521631567s
Jul  5 19:17:18.510: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.631111845s
Jul  5 19:17:20.620: INFO: Pod "pvc-tester-mf7qk": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.741071321s
Jul  5 19:17:22.621: INFO: Deleting pod "pvc-tester-mf7qk" in namespace "pv-5614"
Jul  5 19:17:22.731: INFO: Wait up to 5m0s for pod "pvc-tester-mf7qk" to be fully deleted
Jul  5 19:17:30.950: FAIL: Unexpected error:
    <*errors.errorString | 0xc002c7aa10>: {
        s: "pod \"pvc-tester-mf7qk\" did not exit with Success: pod \"pvc-tester-mf7qk\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-mf7qk\" to be \"Succeeded or Failed\"",
    }
    pod "pvc-tester-mf7qk" did not exit with Success: pod "pvc-tester-mf7qk" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-mf7qk" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage.glob..func22.2.4.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238 +0x371
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004d4180)
... skipping 27 lines ...
Jul  5 19:17:42.046: INFO: At 2021-07-05 19:12:17 +0000 UTC - event for nfs-server: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} Created: Created container nfs-server
Jul  5 19:17:42.046: INFO: At 2021-07-05 19:12:17 +0000 UTC - event for nfs-server: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} Started: Started container nfs-server
Jul  5 19:17:42.046: INFO: At 2021-07-05 19:12:19 +0000 UTC - event for pvc-nrzwn: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Jul  5 19:17:42.046: INFO: At 2021-07-05 19:12:19 +0000 UTC - event for pvc-qdh6h: {persistentvolume-controller } FailedBinding: no persistent volumes available for this claim and no storage class is set
Jul  5 19:17:42.046: INFO: At 2021-07-05 19:12:20 +0000 UTC - event for pvc-tester-mf7qk: {default-scheduler } Scheduled: Successfully assigned pv-5614/pvc-tester-mf7qk to ip-172-20-59-37.eu-central-1.compute.internal
Jul  5 19:17:42.046: INFO: At 2021-07-05 19:14:23 +0000 UTC - event for pvc-tester-mf7qk: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[volume1], unattached volumes=[volume1 kube-api-access-gklbn]: timed out waiting for the condition
Jul  5 19:17:42.046: INFO: At 2021-07-05 19:15:21 +0000 UTC - event for pvc-tester-mf7qk: {kubelet ip-172-20-59-37.eu-central-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "nfs-4268r" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 100.96.3.46:/exports /var/lib/kubelet/pods/302e3a0e-058d-41b2-b09b-c30a41d177bf/volumes/kubernetes.io~nfs/nfs-4268r
Output: mount.nfs: Connection timed out
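The FailedMount event above shows the exact mount invocation kubelet attempted and the resulting timeout. When debugging this by hand, a first step is checking whether the NFS server's TCP port is reachable at all; a hedged sketch (the server address is taken from the log line above, the helper name is made up for illustration):

```python
import socket

def nfs_port_reachable(host, port=2049, timeout=2.0):
    """Return True if a TCP connection to the NFS port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 100.96.3.46 is the pod-network address kubelet failed to mount from; it is
# only routable inside the cluster, so from elsewhere this returns False.
print(nfs_port_reachable("100.96.3.46", timeout=1.0))
```

A timeout here would be consistent with the "Connection timed out" output above, pointing at network policy or the nfs-server pod rather than the mount options.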

Jul  5 19:17:42.046: INFO: At 2021-07-05 19:17:31 +0000 UTC - event for nfs-server: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} Killing: Stopping container nfs-server
Jul  5 19:17:42.154: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
... skipping 164 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:122
    with multiple PVs and PVCs all in same ns
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:212
      should create 2 PVs and 4 PVCs: test write access [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:233

      Jul  5 19:17:30.950: Unexpected error:
          <*errors.errorString | 0xc002c7aa10>: {
              s: "pod \"pvc-tester-mf7qk\" did not exit with Success: pod \"pvc-tester-mf7qk\" failed to reach Success: Gave up after waiting 5m0s for pod \"pvc-tester-mf7qk\" to be \"Succeeded or Failed\"",
          }
          pod "pvc-tester-mf7qk" did not exit with Success: pod "pvc-tester-mf7qk" failed to reach Success: Gave up after waiting 5m0s for pod "pvc-tester-mf7qk" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes.go:238
------------------------------
{"msg":"FAILED [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access","total":-1,"completed":37,"skipped":313,"failed":4,"failures":["[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","[sig-network] Conntrack should drop INVALID conntrack entries","[sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access"]}
Jul  5 19:17:46.321: INFO: Running AfterSuite actions on all nodes
Jul  5 19:17:46.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:17:46.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:17:46.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:17:46.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:17:46.321: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 12 lines ...
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating server pod server in namespace prestop-8611
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8611
STEP: Deleting pre-stop pod
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
STEP: Error validating prestop: the server is currently unable to handle the request (get pods server)
Jul  5 19:18:00.179: FAIL: validating pre-stop.
Unexpected error:
    <*errors.errorString | 0xc0002b8240>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 188 lines ...
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jul  5 19:18:00.179: validating pre-stop.
  Unexpected error:
      <*errors.errorString | 0xc0002b8240>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":15,"skipped":184,"failed":5,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","[sig-node] PreStop should call prestop when killing a pod  [Conformance]"]}
Jul  5 19:18:04.640: INFO: Running AfterSuite actions on all nodes
Jul  5 19:18:04.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:18:04.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:18:04.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:18:04.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:18:04.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  5 19:18:04.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  5 19:18:04.640: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3


{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":26,"skipped":195,"failed":3,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Services should serve multiport endpoints from pods  [Conformance]"]}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jul  5 19:13:18.611: INFO: >>> kubeConfig: /root/.kube/config
... skipping 4 lines ...
Jul  5 19:13:19.158: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
Jul  5 19:13:19.158: INFO: Creating resource for dynamic PV
Jul  5 19:13:19.158: INFO: Using claimSize:1Gi, test suite supported size:{ 1Gi}, driver(aws) supported size:{ 1Gi} 
STEP: creating a StorageClass volume-expand-42736p6gv
STEP: creating a claim
STEP: Creating a pod with dynamically provisioned volume
Jul  5 19:18:19.925: FAIL: While creating pods for resizing
Unexpected error:
    <*errors.errorString | 0xc004a20720>: {
        s: "pod \"pod-9f76b331-0a65-4d46-9fd2-4d61752ca6d1\" is not Running: timed out waiting for the condition",
    }
    pod "pod-9f76b331-0a65-4d46-9fd2-4d61752ca6d1" is not Running: timed out waiting for the condition
occurred

... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-expand-4273".
STEP: Found 6 events.
Jul  5 19:18:20.696: INFO: At 2021-07-05 19:13:19 +0000 UTC - event for awsx8jcv: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:18:20.696: INFO: At 2021-07-05 19:13:19 +0000 UTC - event for awsx8jcv: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "volume-expand-4273/awsx8jcv"
Jul  5 19:18:20.696: INFO: At 2021-07-05 19:13:19 +0000 UTC - event for awsx8jcv: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:18:20.696: INFO: At 2021-07-05 19:13:29 +0000 UTC - event for awsx8jcv: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-42736p6gv": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:18:20.696: INFO: At 2021-07-05 19:13:40 +0000 UTC - event for awsx8jcv: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "volume-expand-42736p6gv": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:18:20.696: INFO: At 2021-07-05 19:18:20 +0000 UTC - event for pod-9f76b331-0a65-4d46-9fd2-4d61752ca6d1: {default-scheduler } FailedScheduling: running PreBind plugin "VolumeBinding": binding volumes: pod does not exist any more: pod "pod-9f76b331-0a65-4d46-9fd2-4d61752ca6d1" not found
Jul  5 19:18:20.805: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:18:20.805: INFO: 
Jul  5 19:18:20.916: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 19:18:21.025: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 46437 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.hostpath.csi/node:ip-172-20-36-144.eu-central-1.compute.internal topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2021-07-05 19:15:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2021-07-05 19:15:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 
LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db docker.io/library/nginx:latest],SizeBytes:53740695,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04eb7308128e98652,DevicePath:,},},Config:nil,},}
... skipping 159 lines ...
    [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should resize volume when PVC is edited while pod is using it [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:246

      Jul  5 19:18:19.925: While creating pods for resizing
      Unexpected error:
          <*errors.errorString | 0xc004a20720>: {
              s: "pod \"pod-9f76b331-0a65-4d46-9fd2-4d61752ca6d1\" is not Running: timed out waiting for the condition",
          }
          pod "pod-9f76b331-0a65-4d46-9fd2-4d61752ca6d1" is not Running: timed out waiting for the condition
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:268
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it","total":-1,"completed":26,"skipped":195,"failed":4,"failures":["[sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","[sig-network] Services should serve multiport endpoints from pods  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it"]}
Jul  5 19:18:24.941: INFO: Running AfterSuite actions on all nodes
Jul  5 19:18:24.941: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:18:24.941: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:18:24.941: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:18:24.941: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:18:24.941: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 22 lines ...
Jul  5 19:18:36.948: INFO: Waiting for pod aws-injector to disappear
Jul  5 19:18:37.058: INFO: Pod aws-injector still exists
Jul  5 19:18:38.948: INFO: Waiting for pod aws-injector to disappear
Jul  5 19:18:39.058: INFO: Pod aws-injector still exists
Jul  5 19:18:40.948: INFO: Waiting for pod aws-injector to disappear
Jul  5 19:18:41.057: INFO: Pod aws-injector no longer exists
Jul  5 19:18:41.058: FAIL: Failed to create injector pod: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186 +0x3ff
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000103500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 7 lines ...
Jul  5 19:18:41.669: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0bb7916c1c73ea6e1".
[AfterEach] [Testpattern: Inline-volume (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-4962".
STEP: Found 4 events.
Jul  5 19:18:41.778: INFO: At 2021-07-05 19:13:34 +0000 UTC - event for aws-injector: {default-scheduler } Scheduled: Successfully assigned volume-4962/aws-injector to ip-172-20-60-158.eu-central-1.compute.internal
Jul  5 19:18:41.778: INFO: At 2021-07-05 19:13:49 +0000 UTC - event for aws-injector: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "ebs.csi.aws.com-vol-0bb7916c1c73ea6e1" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:18:41.778: INFO: At 2021-07-05 19:14:08 +0000 UTC - event for aws-injector: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "ebs.csi.aws.com-vol-0bb7916c1c73ea6e1" : rpc error: code = NotFound desc = Instance "i-0c9540d6fb78f4b7f" not found
Jul  5 19:18:41.778: INFO: At 2021-07-05 19:15:37 +0000 UTC - event for aws-injector: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[aws-volume-0], unattached volumes=[aws-volume-0 kube-api-access-4x5f9]: timed out waiting for the condition
Jul  5 19:18:41.887: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:18:41.887: INFO: 
Jul  5 19:18:41.999: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 19:18:42.108: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 46437 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.hostpath.csi/node:ip-172-20-36-144.eu-central-1.compute.internal topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2021-07-05 19:15:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2021-07-05 19:15:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 
LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db docker.io/library/nginx:latest],SizeBytes:53740695,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04eb7308128e98652,DevicePath:,},},Config:nil,},}
... skipping 156 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (ext4)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Jul  5 19:18:41.059: Failed to create injector pod: timed out waiting for the condition

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data","total":-1,"completed":8,"skipped":34,"failed":1,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data"]}
Jul  5 19:18:46.018: INFO: Running AfterSuite actions on all nodes
Jul  5 19:18:46.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:18:46.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:18:46.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:18:46.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:18:46.018: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 16 lines ...
Jul  5 19:14:00.376: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-3175sg6c2
STEP: creating a claim
Jul  5 19:14:00.486: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-gx5n
STEP: Creating a pod to test subpath
Jul  5 19:14:00.817: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-gx5n" in namespace "provisioning-3175" to be "Succeeded or Failed"
Jul  5 19:14:00.926: INFO: Pod "pod-subpath-test-dynamicpv-gx5n": Phase="Pending", Reason="", readiness=false. Elapsed: 109.322723ms
Jul  5 19:14:03.036: INFO: Pod "pod-subpath-test-dynamicpv-gx5n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218972191s
Jul  5 19:14:05.146: INFO: Pod "pod-subpath-test-dynamicpv-gx5n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329215346s
Jul  5 19:14:07.256: INFO: Pod "pod-subpath-test-dynamicpv-gx5n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.439459567s
Jul  5 19:14:09.366: INFO: Pod "pod-subpath-test-dynamicpv-gx5n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.549060771s
Jul  5 19:14:11.476: INFO: Pod "pod-subpath-test-dynamicpv-gx5n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.65903485s
... skipping 137 lines ...
Jul  5 19:19:02.829: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-gx5n" container "test-init-subpath-dynamicpv-gx5n": 
Jul  5 19:19:02.938: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-gx5n" container "test-container-subpath-dynamicpv-gx5n": 
Jul  5 19:19:03.047: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-gx5n" container "test-container-volume-dynamicpv-gx5n": 
STEP: delete the pod
Jul  5 19:19:03.161: INFO: Waiting for pod pod-subpath-test-dynamicpv-gx5n to disappear
Jul  5 19:19:03.270: INFO: Pod pod-subpath-test-dynamicpv-gx5n no longer exists
Jul  5 19:19:03.270: FAIL: Unexpected error:
    <*errors.errorString | 0xc004131670>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-gx5n\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-gx5n\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-dynamicpv-gx5n" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-gx5n" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00257cb00, 0x6fb6221, 0x7, 0xc003c85c00, 0x1, 0xc002dc1108, 0x1, 0x1, 0x71d34c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 19 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-3175".
STEP: Found 5 events.
Jul  5 19:19:03.821: INFO: At 2021-07-05 19:14:00 +0000 UTC - event for awsnrk8r: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:19:03.821: INFO: At 2021-07-05 19:14:00 +0000 UTC - event for awsnrk8r: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "provisioning-3175/awsnrk8r"
Jul  5 19:19:03.821: INFO: At 2021-07-05 19:14:00 +0000 UTC - event for awsnrk8r: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:19:03.821: INFO: At 2021-07-05 19:14:10 +0000 UTC - event for awsnrk8r: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-3175sg6c2": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:19:03.821: INFO: At 2021-07-05 19:14:21 +0000 UTC - event for awsnrk8r: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-3175sg6c2": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:19:03.931: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:19:03.931: INFO: 
Jul  5 19:19:04.041: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 19:19:04.151: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 46437 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.hostpath.csi/node:ip-172-20-36-144.eu-central-1.compute.internal topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2021-07-05 19:15:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2021-07-05 19:15:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 
LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db docker.io/library/nginx:latest],SizeBytes:53740695,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04eb7308128e98652,DevicePath:,},},Config:nil,},}
Jul  5 19:19:04.151: INFO: 
... skipping 155 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support non-existent path [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:194

      Jul  5 19:19:03.270: Unexpected error:
          <*errors.errorString | 0xc004131670>: {
              s: "expected pod \"pod-subpath-test-dynamicpv-gx5n\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-gx5n\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-dynamicpv-gx5n" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-gx5n" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":32,"skipped":309,"failed":5,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path"]}
Jul  5 19:19:08.089: INFO: Running AfterSuite actions on all nodes
Jul  5 19:19:08.089: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:19:08.089: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:19:08.089: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:19:08.089: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:19:08.089: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 14 lines ...
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0705 19:14:09.642738   12471 metrics_grabber.go:115] Can't find snapshot-controller pod. Grabbing metrics from snapshot-controller is disabled.
W0705 19:14:09.642808   12471 metrics_grabber.go:118] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  5 19:19:09.861: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jul  5 19:19:09.861: INFO: Deleting pod "simpletest-rc-to-be-deleted-7bqnh" in namespace "gc-9958"
Jul  5 19:19:09.974: INFO: Deleting pod "simpletest-rc-to-be-deleted-bl6gn" in namespace "gc-9958"
Jul  5 19:19:10.088: INFO: Deleting pod "simpletest-rc-to-be-deleted-dmzn5" in namespace "gc-9958"
Jul  5 19:19:10.210: INFO: Deleting pod "simpletest-rc-to-be-deleted-hmrdz" in namespace "gc-9958"
Jul  5 19:19:10.323: INFO: Deleting pod "simpletest-rc-to-be-deleted-jnrml" in namespace "gc-9958"
[AfterEach] [sig-api-machinery] Garbage collector
... skipping 5 lines ...
• [SLOW TEST:313.117 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":28,"skipped":295,"failed":3,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data","[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails"]}
Jul  5 19:19:10.667: INFO: Running AfterSuite actions on all nodes
Jul  5 19:19:10.667: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:19:10.667: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:19:10.667: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:19:10.667: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:19:10.667: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 28 lines ...
Jul  5 19:20:06.665: INFO: Waiting for pod aws-injector to disappear
Jul  5 19:20:06.775: INFO: Pod aws-injector still exists
Jul  5 19:20:08.665: INFO: Waiting for pod aws-injector to disappear
Jul  5 19:20:08.774: INFO: Pod aws-injector still exists
Jul  5 19:20:10.665: INFO: Waiting for pod aws-injector to disappear
Jul  5 19:20:10.774: INFO: Pod aws-injector no longer exists
Jul  5 19:20:10.775: FAIL: Failed to create injector pod: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186 +0x3ff
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00087c900)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:131 +0x36c
... skipping 7 lines ...
Jul  5 19:20:11.390: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0f2d95d5edeaf4c9d".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-4009".
STEP: Found 5 events.
Jul  5 19:20:11.500: INFO: At 2021-07-05 19:14:58 +0000 UTC - event for aws-injector: {default-scheduler } Scheduled: Successfully assigned volume-4009/aws-injector to ip-172-20-60-158.eu-central-1.compute.internal
Jul  5 19:20:11.500: INFO: At 2021-07-05 19:15:13 +0000 UTC - event for aws-injector: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "ebs.csi.aws.com-vol-0f2d95d5edeaf4c9d" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:20:11.500: INFO: At 2021-07-05 19:15:32 +0000 UTC - event for aws-injector: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "ebs.csi.aws.com-vol-0f2d95d5edeaf4c9d" : rpc error: code = NotFound desc = Instance "i-0c9540d6fb78f4b7f" not found
Jul  5 19:20:11.500: INFO: At 2021-07-05 19:17:01 +0000 UTC - event for aws-injector: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[aws-volume-0], unattached volumes=[kube-api-access-2nvzw aws-volume-0]: timed out waiting for the condition
Jul  5 19:20:11.500: INFO: At 2021-07-05 19:19:15 +0000 UTC - event for aws-injector: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[aws-volume-0], unattached volumes=[aws-volume-0 kube-api-access-2nvzw]: timed out waiting for the condition
Jul  5 19:20:11.609: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:20:11.609: INFO: 
Jul  5 19:20:11.719: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
... skipping 145 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should store data [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159

      Jul  5 19:20:10.775: Failed to create injector pod: timed out waiting for the condition

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:186
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":20,"skipped":189,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","[sig-network] Services should serve a basic endpoint from pods  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data"]}
Jul  5 19:20:15.775: INFO: Running AfterSuite actions on all nodes
Jul  5 19:20:15.775: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:20:15.775: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:20:15.775: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:20:15.775: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:20:15.775: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 14 lines ...
Jul  5 19:15:16.495: INFO: Creating resource for dynamic PV
Jul  5 19:15:16.495: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass fsgroupchangepolicy-7828njj9n
STEP: creating a claim
Jul  5 19:15:16.605: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating Pod in namespace fsgroupchangepolicy-7828 with fsgroup 1000
Jul  5 19:20:17.265: FAIL: Unexpected error:
    <*errors.errorString | 0xc002b653a0>: {
        s: "pod \"pod-5475d857-9370-4548-a0c5-53cecee6edcd\" is not Running: timed out waiting for the condition",
    }
    pod "pod-5475d857-9370-4548-a0c5-53cecee6edcd" is not Running: timed out waiting for the condition
occurred

... skipping 17 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "fsgroupchangepolicy-7828".
STEP: Found 5 events.
Jul  5 19:20:17.706: INFO: At 2021-07-05 19:15:16 +0000 UTC - event for awsb9ss6: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:20:17.706: INFO: At 2021-07-05 19:15:16 +0000 UTC - event for awsb9ss6: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "fsgroupchangepolicy-7828/awsb9ss6"
Jul  5 19:20:17.706: INFO: At 2021-07-05 19:15:16 +0000 UTC - event for awsb9ss6: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:20:17.706: INFO: At 2021-07-05 19:15:26 +0000 UTC - event for awsb9ss6: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-7828njj9n": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:20:17.706: INFO: At 2021-07-05 19:16:03 +0000 UTC - event for awsb9ss6: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "fsgroupchangepolicy-7828njj9n": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:20:17.816: INFO: POD                                       NODE  PHASE    GRACE  CONDITIONS
Jul  5 19:20:17.816: INFO: pod-5475d857-9370-4548-a0c5-53cecee6edcd        Pending         []
Jul  5 19:20:17.816: INFO: 
Jul  5 19:20:17.926: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 19:20:18.036: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 46872 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.hostpath.csi/node:ip-172-20-36-144.eu-central-1.compute.internal topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2021-07-05 19:15:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2021-07-05 19:15:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 
LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db docker.io/library/nginx:latest],SizeBytes:53740695,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d,DevicePath:,},},Config:nil,},}
... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:208

      Jul  5 19:20:17.265: Unexpected error:
          <*errors.errorString | 0xc002b653a0>: {
              s: "pod \"pod-5475d857-9370-4548-a0c5-53cecee6edcd\" is not Running: timed out waiting for the condition",
          }
          pod "pod-5475d857-9370-4548-a0c5-53cecee6edcd" is not Running: timed out waiting for the condition
      occurred

... skipping 14 lines ...
Jul  5 19:15:23.426: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(aws) supported size:{ 1Mi} 
STEP: creating a StorageClass provisioning-9038h6bjj
STEP: creating a claim
Jul  5 19:15:23.536: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod pod-subpath-test-dynamicpv-hfxp
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 19:15:23.872: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-hfxp" in namespace "provisioning-9038" to be "Succeeded or Failed"
Jul  5 19:15:23.982: INFO: Pod "pod-subpath-test-dynamicpv-hfxp": Phase="Pending", Reason="", readiness=false. Elapsed: 109.770598ms
Jul  5 19:15:26.092: INFO: Pod "pod-subpath-test-dynamicpv-hfxp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220085874s
Jul  5 19:15:28.202: INFO: Pod "pod-subpath-test-dynamicpv-hfxp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330149735s
Jul  5 19:15:30.313: INFO: Pod "pod-subpath-test-dynamicpv-hfxp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441252334s
Jul  5 19:15:32.423: INFO: Pod "pod-subpath-test-dynamicpv-hfxp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.551616656s
Jul  5 19:15:34.534: INFO: Pod "pod-subpath-test-dynamicpv-hfxp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661993427s
... skipping 136 lines ...
Jul  5 19:20:23.703: INFO: Pod "pod-subpath-test-dynamicpv-hfxp": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.83084094s
Jul  5 19:20:25.923: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-hfxp" container "init-volume-dynamicpv-hfxp": 
Jul  5 19:20:26.032: INFO: Output of node "" pod "pod-subpath-test-dynamicpv-hfxp" container "test-container-subpath-dynamicpv-hfxp": 
STEP: delete the pod
Jul  5 19:20:26.148: INFO: Waiting for pod pod-subpath-test-dynamicpv-hfxp to disappear
Jul  5 19:20:26.257: INFO: Pod pod-subpath-test-dynamicpv-hfxp no longer exists
Jul  5 19:20:26.257: FAIL: Unexpected error:
    <*errors.errorString | 0xc004b84cd0>: {
        s: "expected pod \"pod-subpath-test-dynamicpv-hfxp\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-hfxp\" to be \"Succeeded or Failed\"",
    }
    expected pod "pod-subpath-test-dynamicpv-hfxp" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-hfxp" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc002a60580, 0x6ff34dc, 0x15, 0xc003f0a400, 0x0, 0xc00171b0d0, 0x1, 0x1, 0x71d34c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 21 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "provisioning-9038".
STEP: Found 5 events.
Jul  5 19:20:26.810: INFO: At 2021-07-05 19:15:23 +0000 UTC - event for aws2hhbb: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:20:26.810: INFO: At 2021-07-05 19:15:23 +0000 UTC - event for aws2hhbb: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "provisioning-9038/aws2hhbb"
Jul  5 19:20:26.810: INFO: At 2021-07-05 19:15:23 +0000 UTC - event for aws2hhbb: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:20:26.810: INFO: At 2021-07-05 19:15:33 +0000 UTC - event for aws2hhbb: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-9038h6bjj": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:20:26.810: INFO: At 2021-07-05 19:15:56 +0000 UTC - event for aws2hhbb: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "provisioning-9038h6bjj": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:20:26.919: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:20:26.919: INFO: 
Jul  5 19:20:27.030: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
Jul  5 19:20:27.140: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-36-144.eu-central-1.compute.internal    7225e77f-b271-427b-a08c-8b3daacad6fd 46872 0 2021-07-05 18:40:00 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eu-central-1 failure-domain.beta.kubernetes.io/zone:eu-central-1a kops.k8s.io/instancegroup:nodes-eu-central-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-36-144.eu-central-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:eu-central-1a topology.hostpath.csi/node:ip-172-20-36-144.eu-central-1.compute.internal topology.kubernetes.io/region:eu-central-1 topology.kubernetes.io/zone:eu-central-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-067925adb76b0251a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:beta.kubernetes.io/instance-type":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {aws-cloud-controller-manager Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:status":{"f:addresses":{"k:{\"type\":\"ExternalDNS\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"ExternalIP\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalDNS\"}":{".":{},"f:address":{},"f:type":{}}}}} status} {kops-controller Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2021-07-05 18:40:00 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2021-07-05 18:40:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {kube-controller-manager Update v1 2021-07-05 19:15:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2021-07-05 19:15:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///eu-central-1a/i-067925adb76b0251a,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49895047168 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4063887360 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44905542377 0} {<nil>} 44905542377 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3959029760 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-07-05 19:15:55 +0000 UTC,LastTransitionTime:2021-07-05 18:40:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.36.144,},NodeAddress{Type:ExternalIP,Address:18.192.124.200,},NodeAddress{Type:InternalDNS,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:Hostname,Address:ip-172-20-36-144.eu-central-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-192-124-200.eu-central-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec26a0e9b6686625a154e53f3338c245,SystemUUID:ec26a0e9-b668-6625-a154-e53f3338c245,BootID:71bca41e-06e9-4785-8601-cc91d7f94f33,KernelVersion:5.8.0-1038-aws,OSImage:Ubuntu 20.04.2 
LTS,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.22.0-beta.0,KubeProxyVersion:v1.22.0-beta.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.22.0-beta.0],SizeBytes:133254861,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:124a375b4f930627c65b2f84c0d0f09229a96bc527eec18ad0eeac150b96d1c2 k8s.gcr.io/e2e-test-images/volume/nfs:1.2],SizeBytes:95843946,},ContainerImage{Names:[k8s.gcr.io/provider-aws/aws-ebs-csi-driver@sha256:e57f880fa9134e67ae8d3262866637580b8fe6da1d1faec188ac0ad4d1ac2381 k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.0],SizeBytes:67082369,},ContainerImage{Names:[docker.io/library/nginx@sha256:47ae43cdfc7064d28800bc42e79a429540c7c80168e8c8952778c0d5af1c09db docker.io/library/nginx:latest],SizeBytes:53740695,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:48da0e4ed7238ad461ea05f68c25921783c37b315f21a5c5a2780157a6460994 k8s.gcr.io/sig-storage/livenessprobe:v2.2.0],SizeBytes:8279778,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1518^664c588d-ddc5-11eb-b7d8-42987177344d,DevicePath:,},},Config:nil,},}
... skipping 144 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Dynamic PV (default fs)] subPath
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should support file as subpath [LinuxOnly] [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:230

      Jul  5 19:20:26.257: Unexpected error:
          <*errors.errorString | 0xc004b84cd0>: {
              s: "expected pod \"pod-subpath-test-dynamicpv-hfxp\" success: Gave up after waiting 5m0s for pod \"pod-subpath-test-dynamicpv-hfxp\" to be \"Succeeded or Failed\"",
          }
          expected pod "pod-subpath-test-dynamicpv-hfxp" success: Gave up after waiting 5m0s for pod "pod-subpath-test-dynamicpv-hfxp" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":38,"skipped":333,"failed":4,"failures":["[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed in first pod, new pod with different fsgroup applied to the volume contents","[sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","[sig-network] DNS should support configurable pod resolv.conf","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}
Jul  5 19:20:31.135: INFO: Running AfterSuite actions on all nodes
Jul  5 19:20:31.135: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:20:31.135: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:20:31.135: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:20:31.135: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:20:31.135: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 11 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:72
Jul  5 19:15:41.289: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
STEP: Creating a PVC
Jul  5 19:15:41.509: INFO: Default storage class: "kops-csi-1-21"
Jul  5 19:15:41.509: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating a Pod that becomes Running and therefore is actively using the PVC
Jul  5 19:20:42.062: FAIL: While creating pod that uses the PVC or waiting for the Pod to become Running
Unexpected error:
    <*errors.errorString | 0xc0032bf0a0>: {
        s: "pod \"pvc-tester-92g58\" is not Running: timed out waiting for the condition",
    }
    pod "pvc-tester-92g58" is not Running: timed out waiting for the condition
occurred

... skipping 12 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "pvc-protection-4521".
STEP: Found 5 events.
Jul  5 19:20:42.173: INFO: At 2021-07-05 19:15:41 +0000 UTC - event for pvc-protectiond9946: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jul  5 19:20:42.173: INFO: At 2021-07-05 19:15:41 +0000 UTC - event for pvc-protectiond9946: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } Provisioning: External provisioner is provisioning volume for claim "pvc-protection-4521/pvc-protectiond9946"
Jul  5 19:20:42.173: INFO: At 2021-07-05 19:15:41 +0000 UTC - event for pvc-protectiond9946: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jul  5 19:20:42.173: INFO: At 2021-07-05 19:15:51 +0000 UTC - event for pvc-protectiond9946: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "kops-csi-1-21": rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:20:42.173: INFO: At 2021-07-05 19:16:02 +0000 UTC - event for pvc-protectiond9946: {ebs.csi.aws.com_ebs-csi-controller-566c97f85c-f6cbs_682dcd34-7f43-4a16-8f50-24fe74c8d1b9 } ProvisioningFailed: failed to provision volume with StorageClass "kops-csi-1-21": rpc error: code = Internal desc = RequestCanceled: request context canceled
caused by: context deadline exceeded
Jul  5 19:20:42.284: INFO: POD               NODE  PHASE    GRACE  CONDITIONS
Jul  5 19:20:42.284: INFO: pvc-tester-92g58        Pending         []
Jul  5 19:20:42.284: INFO: 
Jul  5 19:20:42.395: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
... skipping 145 lines ...
[sig-storage] PVC Protection
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:145

  Jul  5 19:20:42.062: While creating pod that uses the PVC or waiting for the Pod to become Running
  Unexpected error:
      <*errors.errorString | 0xc0032bf0a0>: {
          s: "pod \"pvc-tester-92g58\" is not Running: timed out waiting for the condition",
      }
      pod "pvc-tester-92g58" is not Running: timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/pvc_protection.go:96
------------------------------
{"msg":"FAILED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":30,"skipped":299,"failed":8,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume","[sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable"]}
Jul  5 19:20:46.552: INFO: Running AfterSuite actions on all nodes
Jul  5 19:20:46.552: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:20:46.552: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:20:46.552: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:20:46.552: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:20:46.552: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
... skipping 14 lines ...
Jul  5 19:15:52.314: INFO: In-tree plugin kubernetes.io/aws-ebs is not migrated, not validating any metrics
STEP: creating a test aws volume
Jul  5 19:15:53.091: INFO: Successfully created a new PD: "aws://eu-central-1a/vol-0ff48e4f216c588b9".
Jul  5 19:15:53.091: INFO: Creating resource for inline volume
STEP: Creating pod exec-volume-test-inlinevolume-7h7q
STEP: Creating a pod to test exec-volume-test
Jul  5 19:15:53.204: INFO: Waiting up to 5m0s for pod "exec-volume-test-inlinevolume-7h7q" in namespace "volume-8987" to be "Succeeded or Failed"
Jul  5 19:15:53.316: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 111.729924ms
Jul  5 19:15:55.426: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22207017s
Jul  5 19:15:57.537: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332830959s
Jul  5 19:15:59.647: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.442686026s
Jul  5 19:16:01.757: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553256759s
Jul  5 19:16:03.868: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663440725s
... skipping 131 lines ...
Jul  5 19:20:42.445: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.240429479s
Jul  5 19:20:44.554: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.350352703s
Jul  5 19:20:46.665: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.460648861s
Jul  5 19:20:48.774: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.570274047s
Jul  5 19:20:50.887: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.682714558s
Jul  5 19:20:52.997: INFO: Pod "exec-volume-test-inlinevolume-7h7q": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.793045338s
Jul  5 19:20:55.218: INFO: Failed to get logs from node "ip-172-20-60-158.eu-central-1.compute.internal" pod "exec-volume-test-inlinevolume-7h7q" container "exec-container-inlinevolume-7h7q": the server rejected our request for an unknown reason (get pods exec-volume-test-inlinevolume-7h7q)
STEP: delete the pod
Jul  5 19:20:55.331: INFO: Waiting for pod exec-volume-test-inlinevolume-7h7q to disappear
Jul  5 19:20:55.441: INFO: Pod exec-volume-test-inlinevolume-7h7q still exists
Jul  5 19:20:57.441: INFO: Waiting for pod exec-volume-test-inlinevolume-7h7q to disappear
Jul  5 19:20:57.551: INFO: Pod exec-volume-test-inlinevolume-7h7q still exists
Jul  5 19:20:59.441: INFO: Waiting for pod exec-volume-test-inlinevolume-7h7q to disappear
Jul  5 19:20:59.551: INFO: Pod exec-volume-test-inlinevolume-7h7q still exists
Jul  5 19:21:01.442: INFO: Waiting for pod exec-volume-test-inlinevolume-7h7q to disappear
Jul  5 19:21:01.551: INFO: Pod exec-volume-test-inlinevolume-7h7q no longer exists
Jul  5 19:21:01.552: FAIL: Unexpected error:
    <*errors.errorString | 0xc003dd98b0>: {
        s: "expected pod \"exec-volume-test-inlinevolume-7h7q\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-inlinevolume-7h7q\" to be \"Succeeded or Failed\"",
    }
    expected pod "exec-volume-test-inlinevolume-7h7q" success: Gave up after waiting 5m0s for pod "exec-volume-test-inlinevolume-7h7q" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0019de580, 0x6fd77e0, 0x10, 0xc003d1f400, 0x0, 0xc0053f70d8, 0x1, 0x1, 0x71d34c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
... skipping 13 lines ...
Jul  5 19:21:02.162: INFO: Successfully deleted PD "aws://eu-central-1a/vol-0ff48e4f216c588b9".
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-8987".
STEP: Found 4 events.
Jul  5 19:21:02.272: INFO: At 2021-07-05 19:15:53 +0000 UTC - event for exec-volume-test-inlinevolume-7h7q: {default-scheduler } Scheduled: Successfully assigned volume-8987/exec-volume-test-inlinevolume-7h7q to ip-172-20-60-158.eu-central-1.compute.internal
Jul  5 19:21:02.273: INFO: At 2021-07-05 19:16:08 +0000 UTC - event for exec-volume-test-inlinevolume-7h7q: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "ebs.csi.aws.com-vol-0ff48e4f216c588b9" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
Jul  5 19:21:02.273: INFO: At 2021-07-05 19:17:56 +0000 UTC - event for exec-volume-test-inlinevolume-7h7q: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[kube-api-access-nj6cl vol1]: timed out waiting for the condition
Jul  5 19:21:02.273: INFO: At 2021-07-05 19:20:10 +0000 UTC - event for exec-volume-test-inlinevolume-7h7q: {kubelet ip-172-20-60-158.eu-central-1.compute.internal} FailedMount: Unable to attach or mount volumes: unmounted volumes=[vol1], unattached volumes=[vol1 kube-api-access-nj6cl]: timed out waiting for the condition
Jul  5 19:21:02.382: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Jul  5 19:21:02.382: INFO: 
Jul  5 19:21:02.492: INFO: 
Logging node info for node ip-172-20-36-144.eu-central-1.compute.internal
... skipping 143 lines ...
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:58
    [Testpattern: Inline-volume (default fs)] volumes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:50
      should allow exec of files on the volume [It]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196

      Jul  5 19:21:01.552: Unexpected error:
          <*errors.errorString | 0xc003dd98b0>: {
              s: "expected pod \"exec-volume-test-inlinevolume-7h7q\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-inlinevolume-7h7q\" to be \"Succeeded or Failed\"",
          }
          expected pod "exec-volume-test-inlinevolume-7h7q" success: Gave up after waiting 5m0s for pod "exec-volume-test-inlinevolume-7h7q" to be "Succeeded or Failed"
      occurred

      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":29,"skipped":228,"failed":6,"failures":["[sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume"]}
Jul  5 19:21:06.555: INFO: Running AfterSuite actions on all nodes
Jul  5 19:21:06.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func17.2
Jul  5 19:21:06.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jul  5 19:21:06.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Jul  5 19:21:06.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jul  5 19:21:06.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jul  5 19:21:06.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jul  5 19:21:06.555: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3

{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-07-05T19:24:54Z"}